Test Report: KVM_Linux_containerd 21724

360d9e050a05bd2ed6961537be9e77a8ddcd2d56:2025-10-13:41891

Failed tests (21/324)

TestAddons/serial/Volcano (375.04s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 33.943973ms
addons_test.go:876: volcano-admission stabilized in 33.988943ms
addons_test.go:868: volcano-scheduler stabilized in 34.120713ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-2ftbx" [8a6a9af2-1806-4afe-9eae-7268a53a5316] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
addons_test.go:890: ***** TestAddons/serial/Volcano: pod "app=volcano-scheduler" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:890: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
addons_test.go:890: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-13 14:08:20.834213412 +0000 UTC m=+791.774771790
addons_test.go:890: (dbg) Run:  kubectl --context addons-214022 describe po volcano-scheduler-76c996c8bf-2ftbx -n volcano-system
addons_test.go:890: (dbg) kubectl --context addons-214022 describe po volcano-scheduler-76c996c8bf-2ftbx -n volcano-system:
Name:                 volcano-scheduler-76c996c8bf-2ftbx
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-scheduler
Node:                 addons-214022/192.168.39.214
Start Time:           Mon, 13 Oct 2025 13:56:17 +0000
Labels:               app=volcano-scheduler
pod-template-hash=76c996c8bf
Annotations:          <none>
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/volcano-scheduler-76c996c8bf
Containers:
volcano-scheduler:
Container ID:  
Image:         docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--logtostderr
--scheduler-conf=/volcano.scheduler/volcano-scheduler.conf
--enable-healthz=true
--enable-metrics=true
--leader-elect=false
--kube-api-qps=2000
--kube-api-burst=2000
--schedule-period=1s
--node-worker-threads=20
-v=3
2>&1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
DEBUG_SOCKET_DIR:  /tmp/klog-socks
Mounts:
/tmp/klog-socks from klog-sock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdfbf (ro)
/volcano.scheduler from scheduler-config (rw)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
scheduler-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-scheduler-configmap
Optional:  false
klog-sock:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-gdfbf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  12m                  default-scheduler  Successfully assigned volcano-system/volcano-scheduler-76c996c8bf-2ftbx to addons-214022
Normal   Pulling    8m30s (x5 over 12m)  kubelet            Pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Warning  Failed     8m30s (x5 over 11m)  kubelet            Failed to pull image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to pull and unpack image "docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     8m30s (x5 over 11m)  kubelet            Error: ErrImagePull
Normal   BackOff    117s (x39 over 11m)  kubelet            Back-off pulling image "docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
Warning  Failed     73s (x42 over 11m)   kubelet            Error: ImagePullBackOff
addons_test.go:890: (dbg) Run:  kubectl --context addons-214022 logs volcano-scheduler-76c996c8bf-2ftbx -n volcano-system
addons_test.go:890: (dbg) Non-zero exit: kubectl --context addons-214022 logs volcano-scheduler-76c996c8bf-2ftbx -n volcano-system: exit status 1 (82.068268ms)

** stderr ** 
	Error from server (BadRequest): container "volcano-scheduler" in pod "volcano-scheduler-76c996c8bf-2ftbx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:890: kubectl --context addons-214022 logs volcano-scheduler-76c996c8bf-2ftbx -n volcano-system: exit status 1
addons_test.go:891: failed waiting for app=volcano-scheduler pod: app=volcano-scheduler within 6m0s: context deadline exceeded
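The kubelet events above identify the root cause: Docker Hub answered the unauthenticated manifest request with 429 Too Many Requests (toomanyrequests), so the pull never succeeded and the pod stayed in ImagePullBackOff. As a triage step, the remaining anonymous pull quota for the runner's IP can be checked with Docker's documented rate-limit probe. A minimal sketch, assuming `curl`, `python3`, and network access to auth.docker.io / registry-1.docker.io:

```shell
# Fetch an anonymous pull token for Docker's dedicated rate-limit probe image,
# then read the RateLimit-* response headers from a manifest HEAD request.
TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["token"])')
curl -fsSI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i '^ratelimit'
```

If the quota is exhausted, typical mitigations are authenticating the pulls (e.g. `docker login` credentials on the node) or pre-pulling the image inside the VM, e.g. via `minikube -p addons-214022 ssh` followed by `sudo crictl pull <image>` for the containerd runtime (hypothetical invocation for this profile).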
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.600198681s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
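The `grep -v … ; echo … > /tmp/h.$$; sudo cp` command above is minikube's idempotent `/etc/hosts` update: strip any existing line for the host name, append the fresh mapping, and copy the result back so repeated runs never duplicate the entry. A sketch of the same pattern against a scratch file (`/tmp/hosts.test` is a hypothetical stand-in for `/etc/hosts`):

```shell
# Scratch hosts file with a stale mapping for the control-plane name.
HOSTS=/tmp/hosts.test
printf '127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Remove any old entry (tab-anchored, end-of-line-anchored), then append the new one.
TAB="$(printf '\t')"
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"
  printf '192.168.39.214\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"

cat "$HOSTS"
```

Anchoring the pattern on a leading tab and trailing `$` keeps unrelated lines (and names that merely share a suffix) intact, while `$$` gives each invocation its own temp file.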
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
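	[editor's note] The `inspektor-gadget` enable failure above is a client-side validation error: one document in ig-crd.yaml reached `kubectl apply` without the two mandatory type fields. Every Kubernetes manifest document must declare `apiVersion` and `kind`; a minimal sketch of the header a CRD document needs (the resource name below is hypothetical, not taken from this run):

	```yaml
	# Required by kubectl client-side validation; their absence produces
	# "error validating data: [apiVersion not set, kind not set]" as logged above.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io   # hypothetical CRD name for illustration
	```

	Passing `--validate=false`, as the error message suggests, would skip this check but the server would still reject a typeless object, so fixing the generated ig-crd.yaml is the real remedy.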
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" lines, repeated at ~0.5s intervals from 13:57:30 through 13:59:07, omitted ...]
	I1013 13:59:08.336076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.835382 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.334500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.835763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.335780 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.834829 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.335922 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.835807 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.335268 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.835042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.334861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.835742 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.835542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.336308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.834819 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.334458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.834430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.334302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.834698 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.335242 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.837355 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.334901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.835822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.335481 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.835077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.335379 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.835858 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.335030 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.334406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.835970 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.336845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.835639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.334566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.834610 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.335758 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.834181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.335230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.836521 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.335115 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.834296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.334011 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.835572 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.334655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.837467 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.334547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.835937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.335478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.334801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.834872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.834089 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.334927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.835775 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.334557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.834110 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.336120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.835608 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.338054 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.835852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.335214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.835500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.334478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.835206 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.335016 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.835509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.334080 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.835482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.336619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.835408 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.334489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.834778 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.334764 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.836472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.834969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.335466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.336616 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.835557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.335389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.837280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.335407 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.835989 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.334416 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.336883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.835437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.334771 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.836376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.334601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.334699 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.834770 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.334874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.835696 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.335335 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.836061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.334551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.836309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.335167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.835702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.334763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.335505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.835798 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.335506 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.836329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.335321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.834801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.334908 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.835943 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.335962 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.836396 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.335654 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.835633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.335803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.835579 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.334633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.335151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.835600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.336050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.835564 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.335649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.335190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.834455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.334544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.835370 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.834672 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.334781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.834666 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.835748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.335284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.835158 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.337417 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.835644 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.335243 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.835634 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.335832 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.836076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.336097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.835499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.334133 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.334598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.835174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.335615 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.835346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.334875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.835362 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.335392 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.835890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.336384 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.835565 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.334702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.836069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.335345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.835340 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.338240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.836180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.336383 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.835503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.836328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.333988 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.335216 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.836465 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.836108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.336180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.836086 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.335099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.836475 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.334621 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.834926 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.334707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.835907 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.336386 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.834665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.334390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.333981 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.836628 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.335276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.334588 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.835824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.338905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.836639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.335704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.835552 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.334682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.835883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:54.335635 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" messages repeat at ~500ms intervals from 14:00:54 through 14:02:16 ...]
	I1013 14:02:17.335062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.835993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9abbc433e914f       7a12f2aed60be       10 minutes ago      Running             gcp-auth                                 0                   1ec08eac71dbb       gcp-auth-78565c9fb4-bk2g7
	d6a3c830fdead       1bec18b3728e7       10 minutes ago      Running             controller                               0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	dc9eac6946abb       738351fd438f0       11 minutes ago      Running             csi-snapshotter                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	caf59fa52cf6c       931dbfd16f87c       11 minutes ago      Running             csi-provisioner                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	dcdb3cedeedc5       e899260153aed       11 minutes ago      Running             liveness-probe                           0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	20320037960be       e255e073c508c       11 minutes ago      Running             hostpath                                 0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	251c9387cb3f1       88ef14a257f42       11 minutes ago      Running             node-driver-registrar                    0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	4bf53d30ff2bf       19a639eda60f0       11 minutes ago      Running             csi-resizer                              0                   38173b2da332e       csi-hostpath-resizer-0
	da92c998f6d36       a1ed5895ba635       11 minutes ago      Running             csi-external-health-monitor-controller   0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	fdb740423cae7       aa61ee9c70bc4       11 minutes ago      Running             volume-snapshot-controller               0                   d87f7092f76cb       snapshot-controller-7d9fbc56b8-fcqg8
	d9300160a8179       59cbb42146a37       11 minutes ago      Running             csi-attacher                             0                   1571308a93146       csi-hostpath-attacher-0
	59dcea13b91a7       aa61ee9c70bc4       11 minutes ago      Running             volume-snapshot-controller               0                   fc7a88bf2bbfa       snapshot-controller-7d9fbc56b8-pnqwn
	ac9ca79606b04       8c217da6734db       11 minutes ago      Exited              patch                                    0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       11 minutes ago      Exited              create                                   0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       11 minutes ago      Running             gadget                                   0                   cd47cb2e122c6       gadget-lrthv
	427e1841635f7       e16d1e3a10667       11 minutes ago      Running             local-path-provisioner                   0                   b07165834017e       local-path-provisioner-648f6765c9-txczb
	55e4c7d9441ba       b1c9f9ef5f0c2       11 minutes ago      Running             registry-proxy                           0                   dbfd8a2965678       registry-proxy-qdl2b
	f3ab2ba81b895       b9e1e3849e070       11 minutes ago      Running             metrics-server                           0                   7779e927d3cc0       metrics-server-85b7d694d7-wlkcr
	11373ec0dad23       b6ab53fbfedaa       11 minutes ago      Running             minikube-ingress-dns                     0                   25d666aa48ee6       kube-ingress-dns-minikube
	e80f90b5d5ef0       5cec5320ed48c       11 minutes ago      Running             cloud-spanner-emulator                   0                   ad0219b9cb121       cloud-spanner-emulator-86bd5cbb97-whp5m
	6b591c0d25ec7       fcbf0ecf31958       11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   5ef9eabdf4f34       nvidia-device-plugin-daemonset-v4lvw
	61d2e3b41e535       6e38f40d628db       12 minutes ago      Running             storage-provisioner                      0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       12 minutes ago      Running             amd-gpu-device-plugin                    0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       12 minutes ago      Running             coredns                                  0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       12 minutes ago      Running             kube-proxy                               0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       12 minutes ago      Running             kube-controller-manager                  0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       12 minutes ago      Running             etcd                                     0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       12 minutes ago      Running             kube-scheduler                           0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       12 minutes ago      Running             kube-apiserver                           0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:07:29 addons-214022 containerd[816]: time="2025-10-13T14:07:29.378223523Z" level=info msg="PullImage \"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\""
	Oct 13 14:07:29 addons-214022 containerd[816]: time="2025-10-13T14:07:29.381899931Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:29 addons-214022 containerd[816]: time="2025-10-13T14:07:29.463081169Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:29 addons-214022 containerd[816]: time="2025-10-13T14:07:29.560597106Z" level=error msg="PullImage \"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\" failed" error="failed to pull and unpack image \"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:07:29 addons-214022 containerd[816]: time="2025-10-13T14:07:29.560781735Z" level=info msg="stop pulling image docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: active requests=0, bytes read=10983"
	Oct 13 14:07:32 addons-214022 containerd[816]: time="2025-10-13T14:07:32.376582674Z" level=info msg="PullImage \"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\""
	Oct 13 14:07:32 addons-214022 containerd[816]: time="2025-10-13T14:07:32.379877114Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:32 addons-214022 containerd[816]: time="2025-10-13T14:07:32.450693306Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:32 addons-214022 containerd[816]: time="2025-10-13T14:07:32.550806888Z" level=error msg="PullImage \"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:07:32 addons-214022 containerd[816]: time="2025-10-13T14:07:32.550858228Z" level=info msg="stop pulling image docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: active requests=0, bytes read=11047"
	Oct 13 14:07:41 addons-214022 containerd[816]: time="2025-10-13T14:07:41.376346110Z" level=info msg="PullImage \"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\""
	Oct 13 14:07:41 addons-214022 containerd[816]: time="2025-10-13T14:07:41.380535524Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:41 addons-214022 containerd[816]: time="2025-10-13T14:07:41.441674738Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:41 addons-214022 containerd[816]: time="2025-10-13T14:07:41.548184241Z" level=error msg="PullImage \"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\" failed" error="failed to pull and unpack image \"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:07:41 addons-214022 containerd[816]: time="2025-10-13T14:07:41.548231231Z" level=info msg="stop pulling image docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: active requests=0, bytes read=10967"
	Oct 13 14:07:46 addons-214022 containerd[816]: time="2025-10-13T14:07:46.377713133Z" level=info msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\""
	Oct 13 14:07:46 addons-214022 containerd[816]: time="2025-10-13T14:07:46.380973964Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:46 addons-214022 containerd[816]: time="2025-10-13T14:07:46.461655366Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:46 addons-214022 containerd[816]: time="2025-10-13T14:07:46.569185261Z" level=error msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:07:46 addons-214022 containerd[816]: time="2025-10-13T14:07:46.569295724Z" level=info msg="stop pulling image docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: active requests=0, bytes read=11015"
	Oct 13 14:07:51 addons-214022 containerd[816]: time="2025-10-13T14:07:51.375915188Z" level=info msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\""
	Oct 13 14:07:51 addons-214022 containerd[816]: time="2025-10-13T14:07:51.379674648Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:51 addons-214022 containerd[816]: time="2025-10-13T14:07:51.459669040Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:07:51 addons-214022 containerd[816]: time="2025-10-13T14:07:51.568190049Z" level=error msg="PullImage \"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:07:51 addons-214022 containerd[816]: time="2025-10-13T14:07:51.568311107Z" level=info msg="stop pulling image docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: active requests=0, bytes read=11063"
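	Every image pull in the containerd log above fails the same way: Docker Hub returns 429 Too Many Requests for unauthenticated pulls, which is why the volcano-scheduler, volcano-webhook-manager, volcano-controller-manager, registry, and yakd images never arrive and their pods stay Pending. A quick way to tally which images are affected from a captured log is sketched below; the `tally_429` helper and the abbreviated sample lines are illustrative, not part of this report — point the pipeline at the real log (e.g. `journalctl -u containerd`) instead of the heredoc.

	```shell
	# Tally Docker Hub 429 pull failures per image from containerd log lines on stdin.
	tally_429() {
	  grep '429 Too Many Requests' \
	    | sed -n 's/.*failed to pull and unpack image \\"\([^@]*\)@.*/\1/p' \
	    | sort | uniq -c | sort -rn
	}

	# Abbreviated sample lines in the same shape as the log above.
	tally_429 <<'EOF'
	time="..." level=error msg="PullImage failed" error="failed to pull and unpack image \"docker.io/library/registry@sha256:3725...\": ... 429 Too Many Requests ..."
	time="..." level=error msg="PullImage failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b...\": ... 429 Too Many Requests ..."
	EOF
	```

	The interleaved "failed to decode hosts.toml" / "invalid `host` tree" errors are a separate issue — a malformed registry mirror config under containerd's certs.d directory — but the pulls still fall through to registry-1.docker.io, where the rate limit is what actually kills them.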
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:56742 - 33611 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000518017s
	[INFO] 10.244.0.8:35624 - 39153 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000207885s
	[INFO] 10.244.0.8:35624 - 42414 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000192241s
	[INFO] 10.244.0.8:35624 - 24510 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000107523s
	[INFO] 10.244.0.8:35624 - 57233 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000219245s
	[INFO] 10.244.0.8:35624 - 40816 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000067485s
	[INFO] 10.244.0.8:35624 - 6184 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000067607s
	[INFO] 10.244.0.8:35624 - 28138 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000211093s
	[INFO] 10.244.0.8:35624 - 44894 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000068313s
	[INFO] 10.244.0.8:55597 - 65097 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000301523s
	[INFO] 10.244.0.8:55597 - 25289 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000639332s
	[INFO] 10.244.0.8:55597 - 63744 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000131288s
	[INFO] 10.244.0.8:55597 - 39457 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000183623s
	[INFO] 10.244.0.8:55597 - 57594 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00024842s
	[INFO] 10.244.0.8:55597 - 41872 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000152078s
	[INFO] 10.244.0.8:55597 - 2596 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000240375s
	[INFO] 10.244.0.8:55597 - 25802 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000140595s
	[INFO] 10.244.0.8:39222 - 27198 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000280474s
	[INFO] 10.244.0.8:39222 - 63532 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000518892s
	[INFO] 10.244.0.8:39222 - 23176 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00016409s
	[INFO] 10.244.0.8:39222 - 57472 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000605374s
	[INFO] 10.244.0.8:39222 - 35124 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000240459s
	[INFO] 10.244.0.8:39222 - 54064 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000350383s
	[INFO] 10.244.0.8:39222 - 34748 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000118139s
	[INFO] 10.244.0.8:39222 - 52911 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000089933s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214022"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:08:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:06:52 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:06:52 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:06:52 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:06:52 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-86bd5cbb97-whp5m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-lrthv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-78565c9fb4-bk2g7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-k6tpl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-h4thg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-4jxqs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-214022                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-214022                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-214022       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-m9kg9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-214022                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-85b7d694d7-wlkcr             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-v4lvw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-66898fdd98-qpt8q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-creds-764b6fb674-rsjlm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-qdl2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-fcqg8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-pnqwn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-648f6765c9-txczb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-6c447bd768-twzzh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-admission-init-jln4n                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-controllers-6fd4f85cb8-wldls        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  volcano-system              volcano-scheduler-76c996c8bf-2ftbx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bl6xb              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[Oct13 13:55] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003776] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.201159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085735] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112005] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.097255] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.134471] kauditd_printk_skb: 171 callbacks suppressed
	[Oct13 13:56] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.000102] kauditd_printk_skb: 285 callbacks suppressed
	[  +1.171734] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.188548] kauditd_printk_skb: 340 callbacks suppressed
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"warn","ts":"2025-10-13T13:56:55.992985Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.897843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:56:55.993006Z","caller":"traceutil/trace.go:172","msg":"trace[113277730] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"161.928328ms","start":"2025-10-13T13:56:55.831073Z","end":"2025-10-13T13:56:55.993001Z","steps":["trace[113277730] 'agreement among raft nodes before linearized reading'  (duration: 161.883677ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.062597Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"344.36576ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.063002Z","caller":"traceutil/trace.go:172","msg":"trace[1485950550] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1254; }","duration":"344.775789ms","start":"2025-10-13T13:57:02.718212Z","end":"2025-10-13T13:57:03.062988Z","steps":["trace[1485950550] 'range keys from in-memory index tree'  (duration: 344.330962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.063355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"337.790718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.064000Z","caller":"traceutil/trace.go:172","msg":"trace[178016112] range","detail":"{range_begin:/registry/rolebindings; range_end:; response_count:0; response_revision:1254; }","duration":"338.436155ms","start":"2025-10-13T13:57:02.725551Z","end":"2025-10-13T13:57:03.063987Z","steps":["trace[178016112] 'range keys from in-memory index tree'  (duration: 337.736668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.064070Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T13:57:02.725533Z","time spent":"338.514292ms","remote":"127.0.0.1:34032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/rolebindings\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T13:57:03.064716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"338.64157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" limit:1 ","response":"range_response_count:1 size:1742"}
	{"level":"info","ts":"2025-10-13T13:57:03.064741Z","caller":"traceutil/trace.go:172","msg":"trace[263850864] range","detail":"{range_begin:/registry/secrets/gcp-auth/gcp-auth-certs; range_end:; response_count:1; response_revision:1254; }","duration":"338.671492ms","start":"2025-10-13T13:57:02.726062Z","end":"2025-10-13T13:57:03.064734Z","steps":["trace[263850864] 'range keys from in-memory index tree'  (duration: 338.518331ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.064758Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T13:57:02.726051Z","time spent":"338.702429ms","remote":"127.0.0.1:33502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":1765,"request content":"key:\"/registry/secrets/gcp-auth/gcp-auth-certs\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T13:57:03.065180Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.863834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-13T13:57:03.065245Z","caller":"traceutil/trace.go:172","msg":"trace[1044767909] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1254; }","duration":"298.932714ms","start":"2025-10-13T13:57:02.766306Z","end":"2025-10-13T13:57:03.065238Z","steps":["trace[1044767909] 'range keys from in-memory index tree'  (duration: 298.524712ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.065838Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.483456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.065887Z","caller":"traceutil/trace.go:172","msg":"trace[759368767] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"241.535842ms","start":"2025-10-13T13:57:02.824345Z","end":"2025-10-13T13:57:03.065880Z","steps":["trace[759368767] 'range keys from in-memory index tree'  (duration: 241.245545ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066255Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"235.693879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066329Z","caller":"traceutil/trace.go:172","msg":"trace[1337303940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"235.769671ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066321Z","steps":["trace[1337303940] 'range keys from in-memory index tree'  (duration: 235.56325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.221636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066824Z","caller":"traceutil/trace.go:172","msg":"trace[1790166720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"236.26612ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066818Z","steps":["trace[1790166720] 'range keys from in-memory index tree'  (duration: 236.097045ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	
	
	==> gcp-auth [9abbc433e914f8d38b9388fd95feaaa5f77485b368fd6429e3e97937e1891abf] <==
	2025/10/13 13:57:29 GCP Auth Webhook started!
	
	
	==> kernel <==
	 14:08:22 up 12 min,  0 users,  load average: 1.05, 0.73, 0.61
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	W1013 13:56:20.556099       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1013 13:56:20.739443       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.145.39"}
	W1013 13:56:31.494227       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:31.579977       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:31.716055       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:31.761888       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:31.865778       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:31.940875       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 13:56:31.979564       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 13:56:32.002782       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:32.041686       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 13:56:32.070182       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:32.085282       1 logging.go:55] [core] [Channel #307 SubChannel #308]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:32.113132       1 logging.go:55] [core] [Channel #311 SubChannel #312]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:32.126225       1 logging.go:55] [core] [Channel #315 SubChannel #316]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 13:56:47.811801       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 13:56:47.812959       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.151.157:443: connect: connection refused" logger="UnhandledError"
	E1013 13:56:47.813182       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 13:56:47.815427       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.151.157:443: connect: connection refused" logger="UnhandledError"
	E1013 13:56:47.820559       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.157:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.151.157:443: connect: connection refused" logger="UnhandledError"
	I1013 13:56:47.920032       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1013 14:05:54.490392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	I1013 13:56:01.516009       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 13:56:01.517969       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 13:56:01.518111       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 13:56:01.518441       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 13:56:01.518602       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 13:56:01.519036       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 13:56:01.519045       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 13:56:01.519051       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 13:56:01.526996       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 13:56:01.527936       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 13:56:01.531895       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-214022" podCIDRs=["10.244.0.0/24"]
	E1013 13:56:10.480203       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1013 13:56:31.487266       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 13:56:31.488053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1013 13:56:31.488256       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1013 13:56:31.488287       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1013 13:56:31.488351       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1013 13:56:31.488442       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1013 13:56:31.488660       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 13:56:31.488779       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1013 13:56:31.488863       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 13:56:31.592108       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 13:56:31.627968       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 13:56:32.989776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 13:56:33.144661       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.569998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:54.570036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:54.570113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:07:46 addons-214022 kubelet[1511]: E1013 14:07:46.569773    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 13 14:07:46 addons-214022 kubelet[1511]: E1013 14:07:46.570635    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container volcano-scheduler start failed in pod volcano-scheduler-76c996c8bf-2ftbx_volcano-system(8a6a9af2-1806-4afe-9eae-7268a53a5316): ErrImagePull: failed to pull and unpack image \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:07:46 addons-214022 kubelet[1511]: E1013 14:07:46.570768    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-2ftbx" podUID="8a6a9af2-1806-4afe-9eae-7268a53a5316"
	Oct 13 14:07:48 addons-214022 kubelet[1511]: E1013 14:07:48.375774    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-jln4n" podUID="c6d9987d-10ec-4c54-b72f-58efa4ac8ce2"
	Oct 13 14:07:51 addons-214022 kubelet[1511]: E1013 14:07:51.568580    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
	Oct 13 14:07:51 addons-214022 kubelet[1511]: E1013 14:07:51.568644    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242"
	Oct 13 14:07:51 addons-214022 kubelet[1511]: E1013 14:07:51.568722    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container volcano-controllers start failed in pod volcano-controllers-6fd4f85cb8-wldls_volcano-system(b29e673f-59ae-4af7-aea1-c490ff7242cf): ErrImagePull: failed to pull and unpack image \"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:07:51 addons-214022 kubelet[1511]: E1013 14:07:51.568757    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-wldls" podUID="b29e673f-59ae-4af7-aea1-c490ff7242cf"
	Oct 13 14:07:52 addons-214022 kubelet[1511]: E1013 14:07:52.379734    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb" podUID="9b696edf-33b0-4b8c-a0c6-b17b9bb067fa"
	Oct 13 14:07:53 addons-214022 kubelet[1511]: I1013 14:07:53.376185    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:07:53 addons-214022 kubelet[1511]: E1013 14:07:53.378603    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:08:00 addons-214022 kubelet[1511]: E1013 14:08:00.375524    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-jln4n" podUID="c6d9987d-10ec-4c54-b72f-58efa4ac8ce2"
	Oct 13 14:08:01 addons-214022 kubelet[1511]: E1013 14:08:01.375993    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-2ftbx" podUID="8a6a9af2-1806-4afe-9eae-7268a53a5316"
	Oct 13 14:08:04 addons-214022 kubelet[1511]: I1013 14:08:04.375798    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:08:04 addons-214022 kubelet[1511]: E1013 14:08:04.378638    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb" podUID="9b696edf-33b0-4b8c-a0c6-b17b9bb067fa"
	Oct 13 14:08:04 addons-214022 kubelet[1511]: E1013 14:08:04.379031    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:08:07 addons-214022 kubelet[1511]: E1013 14:08:07.377620    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-wldls" podUID="b29e673f-59ae-4af7-aea1-c490ff7242cf"
	Oct 13 14:08:11 addons-214022 kubelet[1511]: E1013 14:08:11.378972    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-jln4n" podUID="c6d9987d-10ec-4c54-b72f-58efa4ac8ce2"
	Oct 13 14:08:12 addons-214022 kubelet[1511]: I1013 14:08:12.375592    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-k6tpl" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:08:15 addons-214022 kubelet[1511]: E1013 14:08:15.377323    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-scheduler/manifests/sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-scheduler-76c996c8bf-2ftbx" podUID="8a6a9af2-1806-4afe-9eae-7268a53a5316"
	Oct 13 14:08:15 addons-214022 kubelet[1511]: E1013 14:08:15.380473    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb" podUID="9b696edf-33b0-4b8c-a0c6-b17b9bb067fa"
	Oct 13 14:08:17 addons-214022 kubelet[1511]: I1013 14:08:17.379152    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:08:17 addons-214022 kubelet[1511]: E1013 14:08:17.380741    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:08:19 addons-214022 kubelet[1511]: E1013 14:08:19.375926    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-controller-manager:v1.13.0@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-controller-manager/manifests/sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-controllers-6fd4f85cb8-wldls" podUID="b29e673f-59ae-4af7-aea1-c490ff7242cf"
	Oct 13 14:08:22 addons-214022 kubelet[1511]: E1013 14:08:22.375463    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-jln4n" podUID="c6d9987d-10ec-4c54-b72f-58efa4ac8ce2"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:07:57.503852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:07:59.507962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:07:59.515304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:01.521668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:01.536166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:03.541255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:03.551632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:05.554302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:05.562477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:07.565849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:07.571759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:09.576726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:09.585510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:11.591709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:11.602051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:13.605510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:13.614055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:15.619179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:15.628136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:17.632954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:17.639354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:19.644327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:19.650093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:21.653923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:08:21.662741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm volcano-admission-6c447bd768-twzzh volcano-admission-init-jln4n volcano-controllers-6fd4f85cb8-wldls volcano-scheduler-76c996c8bf-2ftbx yakd-dashboard-5ff678cb9-bl6xb
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm volcano-admission-6c447bd768-twzzh volcano-admission-init-jln4n volcano-controllers-6fd4f85cb8-wldls volcano-scheduler-76c996c8bf-2ftbx yakd-dashboard-5ff678cb9-bl6xb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm volcano-admission-6c447bd768-twzzh volcano-admission-init-jln4n volcano-controllers-6fd4f85cb8-wldls volcano-scheduler-76c996c8bf-2ftbx yakd-dashboard-5ff678cb9-bl6xb: exit status 1 (81.504997ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rsjlm" not found
	Error from server (NotFound): pods "volcano-admission-6c447bd768-twzzh" not found
	Error from server (NotFound): pods "volcano-admission-init-jln4n" not found
	Error from server (NotFound): pods "volcano-controllers-6fd4f85cb8-wldls" not found
	Error from server (NotFound): pods "volcano-scheduler-76c996c8bf-2ftbx" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-bl6xb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm volcano-admission-6c447bd768-twzzh volcano-admission-init-jln4n volcano-controllers-6fd4f85cb8-wldls volcano-scheduler-76c996c8bf-2ftbx yakd-dashboard-5ff678cb9-bl6xb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable volcano --alsologtostderr -v=1: (11.909674529s)
--- FAIL: TestAddons/serial/Volcano (375.04s)

TestAddons/parallel/Registry (363.16s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.212254ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:337: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:384: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
addons_test.go:384: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-10-13 14:14:53.990618056 +0000 UTC m=+1184.931176444
addons_test.go:384: (dbg) Run:  kubectl --context addons-214022 describe po registry-66898fdd98-qpt8q -n kube-system
addons_test.go:384: (dbg) kubectl --context addons-214022 describe po registry-66898fdd98-qpt8q -n kube-system:
Name:             registry-66898fdd98-qpt8q
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-214022/192.168.39.214
Start Time:       Mon, 13 Oct 2025 13:56:09 +0000
Labels:           actual-registry=true
addonmanager.kubernetes.io/mode=Reconcile
kubernetes.io/minikube-addons=registry
pod-template-hash=66898fdd98
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/registry-66898fdd98
Containers:
registry:
Container ID:   
Image:          docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d
Image ID:       
Port:           5000/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
REGISTRY_STORAGE_DELETE_ENABLED:  true
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4cq66 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4cq66:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                           Age                   From               Message
----     ------                           ----                  ----               -------
Normal   Scheduled                        18m                   default-scheduler  Successfully assigned kube-system/registry-66898fdd98-qpt8q to addons-214022
Warning  Failed                           16m (x4 over 18m)     kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d": failed to pull and unpack image "docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           16m (x4 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed                           16m (x6 over 18m)     kubelet            Error: ImagePullBackOff
Normal   Pulling                          15m (x5 over 18m)     kubelet            Pulling image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
Warning  FailedToRetrieveImagePullSecret  3m33s (x71 over 18m)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
Normal   BackOff                          3m33s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
addons_test.go:384: (dbg) Run:  kubectl --context addons-214022 logs registry-66898fdd98-qpt8q -n kube-system
addons_test.go:384: (dbg) Non-zero exit: kubectl --context addons-214022 logs registry-66898fdd98-qpt8q -n kube-system: exit status 1 (73.130857ms)

** stderr ** 
	Error from server (BadRequest): container "registry" in pod "registry-66898fdd98-qpt8q" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:384: kubectl --context addons-214022 logs registry-66898fdd98-qpt8q -n kube-system: exit status 1
addons_test.go:385: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.393399795s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	│ addons  │ addons-214022 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ enable headlamp -p addons-214022 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:11 UTC │ 13 Oct 25 14:11 UTC │
	│ addons  │ addons-214022 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:13 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
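The generated kubeadm config above is one multi-document YAML containing four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal stdlib-only sketch of splitting such a file into its documents and listing each document's `kind` (the sample string is abridged from the config in the log; it is an illustration, not minikube code):

```python
# Split a kubeadm-style multi-document YAML on its "---" separators and
# report each document's "kind", using only the standard library.
SAMPLE = """apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def document_kinds(multi_doc: str) -> list[str]:
    """Return the `kind:` value of each YAML document, in order."""
    kinds = []
    for doc in multi_doc.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

print(document_kinds(SAMPLE))
```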
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
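The bash one-liner above idempotently updates `/etc/hosts`: it filters out any line already mapping `control-plane.minikube.internal`, then appends the fresh mapping. The same transformation can be sketched on a string with a hypothetical helper (not minikube code):

```python
def set_hosts_entry(hosts: str, ip: str, name: str) -> str:
    """Drop any line already mapping `name`, then append the new mapping.

    Mirrors the grep -v / echo pipeline in the log above."""
    kept = [l for l in hosts.splitlines() if not l.endswith(f"\t{name}")]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n"
after = set_hosts_entry(before, "192.168.39.214", "control-plane.minikube.internal")
print(after)
```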
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
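The `[control-plane-check]` lines above show kubeadm polling each component's health endpoint (`/livez` on kube-apiserver, `/healthz` on the controller manager, `/livez` on the scheduler) until it answers or a 4m0s deadline passes. A stdlib-only sketch of that poll-until-healthy pattern, with a toy HTTP server standing in for a component endpoint (all names here are illustrative):

```python
import http.server
import threading
import time
import urllib.error
import urllib.request

class Livez(http.server.BaseHTTPRequestHandler):
    """Toy stand-in for a component health endpoint."""
    def do_GET(self):
        self.send_response(200 if self.path == "/livez" else 404)
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

def wait_healthy(url: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll `url` until it returns HTTP 200 or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if urllib.request.urlopen(url, timeout=1).status == 200:
                return True
        except (urllib.error.URLError, ConnectionError):
            pass
        time.sleep(interval)
    return False

srv = http.server.HTTPServer(("127.0.0.1", 0), Livez)
threading.Thread(target=srv.serve_forever, daemon=True).start()
ok = wait_healthy(f"http://127.0.0.1:{srv.server_port}/livez")
srv.shutdown()
print(ok)
```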
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
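The `--discovery-token-ca-cert-hash` in the join commands above is `sha256:` followed by the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. Extracting the SPKI from a real certificate needs a crypto library; the sketch below uses dummy bytes and only illustrates the hash formatting:

```python
import hashlib

def discovery_hash(spki_der: bytes) -> str:
    """Format a kubeadm-style discovery hash from SPKI DER bytes.

    `spki_der` would come from parsing the CA cert; the bytes passed
    below are placeholders, not a real key."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

h = discovery_hash(b"\x30\x82\x01\x22dummy-spki-bytes")
print(h)
```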
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
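(Editor's note on the failure above: the repeated `inspektor-gadget` apply errors all stem from `kubectl` validation rejecting `/etc/kubernetes/addons/ig-crd.yaml` because the manifest is missing the `apiVersion` and `kind` fields that every Kubernetes object must declare. A minimal sketch of the header such a CRD file needs is below; the group/version and CRD name are illustrative assumptions, not the actual contents of `ig-crd.yaml`.)

```yaml
# Every Kubernetes manifest applied with `kubectl apply` must declare
# apiVersion and kind, or server-side/client-side validation rejects it
# with exactly the "[apiVersion not set, kind not set]" error seen above.
# The values below are hypothetical placeholders for illustration only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.kinvolk.io
spec:
  # ... remainder of the CRD spec ...
```

As the stderr hints, validation can be bypassed with `--validate=false`, but the proper fix is restoring the missing fields in the generated manifest; the retry at 13:57:27 fails identically because the file on disk is unchanged.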
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.335102 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.834820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.337927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.836162 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.334652 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll line repeated every ~500ms from 13:57:32 to 13:59:48; pod remained Pending throughout ...]
	I1013 13:59:48.836472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.834969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.335466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.336616 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.835557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.335389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.837280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.335407 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.835989 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.334416 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.336883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.835437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.334771 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.836376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.334601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.334699 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.834770 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.334874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.835696 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.335335 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.836061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.334551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.836309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.335167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.835702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.334763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.335505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.835798 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.335506 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.836329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.335321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.834801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.334908 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.835943 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.335962 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.836396 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.335654 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.835633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.335803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.835579 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.334633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.335151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.835600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.336050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.835564 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.335649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.335190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.834455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.334544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.835370 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.834672 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.334781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.834666 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.835748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.335284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.835158 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.337417 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.835644 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.335243 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.835634 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.335832 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.836076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.336097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.835499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.334133 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.334598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.835174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.335615 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.835346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.334875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.835362 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.335392 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.835890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.336384 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.835565 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.334702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.836069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.335345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.835340 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.338240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.836180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.336383 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.835503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.836328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.333988 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.335216 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.836465 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.836108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.336180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.836086 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.335099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.836475 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.334621 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.834926 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.334707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.835907 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.336386 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.834665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.334390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.333981 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.836628 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.335276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.334588 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.835824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.338905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.836639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.335704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.835552 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.334682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.835883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:54.335635 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:54.835001 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:55.334830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:55.834874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:56.336549 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:56.838494 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:57.335810 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:57.834944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:58.335374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:58.834675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:59.335833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:59.836291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:00.334291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:00.835818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:01.335302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:01.836497 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:02.334553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:02.834695 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:03.335580 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:03.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:04.336475 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:04.834974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:05.335889 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:05.835181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:06.336380 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:06.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:07.336442 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:07.834531 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:08.335397 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:08.834456 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:09.337231 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:09.834677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:10.335412 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:10.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:11.336539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:11.835527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:12.335028 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:12.835688 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:13.335233 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:13.835239 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:14.335877 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:14.836559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:15.335297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:15.837219 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:16.336121 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:16.834649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:17.336482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:17.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:18.335108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:18.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:19.335574 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:19.834926 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:20.335903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:20.835661 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:21.337729 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:21.835944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:22.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:22.834840 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:23.336497 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:23.835735 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:24.336414 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:24.835122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:25.335039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:25.835080 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:26.336069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:26.835239 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:27.335177 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:27.835351 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:28.335126 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:28.835180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:29.335028 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:29.835406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:30.334198 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:30.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:31.336224 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:31.836107 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:32.336440 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:32.835883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:33.336101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:33.835094 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[… 88 identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" polling entries, logged every ~0.5s between 14:01:34 and 14:02:17, elided …]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4b9c2b1e8388b       56cc512116c8f       6 minutes ago       Running             busybox                                  0                   c2017033bd492       busybox
	d6a3c830fdead       1bec18b3728e7       17 minutes ago      Running             controller                               0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	dc9eac6946abb       738351fd438f0       17 minutes ago      Running             csi-snapshotter                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	caf59fa52cf6c       931dbfd16f87c       17 minutes ago      Running             csi-provisioner                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	dcdb3cedeedc5       e899260153aed       17 minutes ago      Running             liveness-probe                           0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	20320037960be       e255e073c508c       17 minutes ago      Running             hostpath                                 0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	251c9387cb3f1       88ef14a257f42       17 minutes ago      Running             node-driver-registrar                    0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	4bf53d30ff2bf       19a639eda60f0       17 minutes ago      Running             csi-resizer                              0                   38173b2da332e       csi-hostpath-resizer-0
	da92c998f6d36       a1ed5895ba635       17 minutes ago      Running             csi-external-health-monitor-controller   0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	fdb740423cae7       aa61ee9c70bc4       17 minutes ago      Running             volume-snapshot-controller               0                   d87f7092f76cb       snapshot-controller-7d9fbc56b8-fcqg8
	d9300160a8179       59cbb42146a37       17 minutes ago      Running             csi-attacher                             0                   1571308a93146       csi-hostpath-attacher-0
	59dcea13b91a7       aa61ee9c70bc4       17 minutes ago      Running             volume-snapshot-controller               0                   fc7a88bf2bbfa       snapshot-controller-7d9fbc56b8-pnqwn
	ac9ca79606b04       8c217da6734db       17 minutes ago      Exited              patch                                    0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       17 minutes ago      Exited              create                                   0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       18 minutes ago      Running             gadget                                   0                   cd47cb2e122c6       gadget-lrthv
	55e4c7d9441ba       b1c9f9ef5f0c2       18 minutes ago      Running             registry-proxy                           0                   dbfd8a2965678       registry-proxy-qdl2b
	11373ec0dad23       b6ab53fbfedaa       18 minutes ago      Running             minikube-ingress-dns                     0                   25d666aa48ee6       kube-ingress-dns-minikube
	61d2e3b41e535       6e38f40d628db       18 minutes ago      Running             storage-provisioner                      0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       18 minutes ago      Running             amd-gpu-device-plugin                    0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       18 minutes ago      Running             coredns                                  0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       18 minutes ago      Running             kube-proxy                               0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       19 minutes ago      Running             kube-controller-manager                  0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       19 minutes ago      Running             etcd                                     0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       19 minutes ago      Running             kube-scheduler                           0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       19 minutes ago      Running             kube-apiserver                           0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:14:02 addons-214022 containerd[816]: time="2025-10-13T14:14:02.038166071Z" level=warning msg="cleaning up after shim disconnected" id=f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3 namespace=k8s.io
	Oct 13 14:14:02 addons-214022 containerd[816]: time="2025-10-13T14:14:02.039080708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:14:02 addons-214022 containerd[816]: time="2025-10-13T14:14:02.064652164Z" level=warning msg="cleanup warnings time=\"2025-10-13T14:14:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Oct 13 14:14:02 addons-214022 containerd[816]: time="2025-10-13T14:14:02.170627219Z" level=info msg="TearDown network for sandbox \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" successfully"
	Oct 13 14:14:02 addons-214022 containerd[816]: time="2025-10-13T14:14:02.170735150Z" level=info msg="StopPodSandbox for \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" returns successfully"
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.021963944Z" level=info msg="Kill container \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\""
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.065807495Z" level=info msg="shim disconnected" id=427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345 namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.065906329Z" level=warning msg="cleaning up after shim disconnected" id=427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345 namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.065915682Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.102136968Z" level=info msg="StopContainer for \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\" returns successfully"
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.103113394Z" level=info msg="StopPodSandbox for \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\""
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.103215535Z" level=info msg="Container to stop \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.156572742Z" level=info msg="shim disconnected" id=b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731 namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.156693112Z" level=warning msg="cleaning up after shim disconnected" id=b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731 namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.156704556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.289566452Z" level=info msg="TearDown network for sandbox \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" successfully"
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.289616083Z" level=info msg="StopPodSandbox for \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" returns successfully"
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.325962803Z" level=info msg="RemoveContainer for \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\""
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.334780339Z" level=info msg="RemoveContainer for \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\" returns successfully"
	Oct 13 14:14:27 addons-214022 containerd[816]: time="2025-10-13T14:14:27.336609167Z" level=error msg="ContainerStatus for \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"427e1841635f76945f72a19ebdbeffa6d3517e4aada722f31e175a1a20c5c345\": not found"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.377186423Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.380293733Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.479291104Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.569564325Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.569663145Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10965"
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:56370 - 19493 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000241027s
	[INFO] 10.244.0.8:57860 - 31293 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000357706s
	[INFO] 10.244.0.8:57860 - 29104 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000102832s
	[INFO] 10.244.0.8:57860 - 41728 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000086359s
	[INFO] 10.244.0.8:57860 - 37507 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000273482s
	[INFO] 10.244.0.8:57860 - 41775 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006762s
	[INFO] 10.244.0.8:57860 - 61193 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000083348s
	[INFO] 10.244.0.8:57860 - 4414 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000203136s
	[INFO] 10.244.0.8:57860 - 38466 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000066044s
	[INFO] 10.244.0.8:42571 - 11270 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00046035s
	[INFO] 10.244.0.8:42571 - 39048 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000749443s
	[INFO] 10.244.0.8:42571 - 11040 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000183412s
	[INFO] 10.244.0.8:42571 - 28972 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000179149s
	[INFO] 10.244.0.8:42571 - 4383 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000182192s
	[INFO] 10.244.0.8:42571 - 44910 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092015s
	[INFO] 10.244.0.8:42571 - 27090 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000157955s
	[INFO] 10.244.0.8:42571 - 45275 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000323484s
	[INFO] 10.244.0.8:47771 - 24897 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000148398s
	[INFO] 10.244.0.8:47771 - 13774 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000233622s
	[INFO] 10.244.0.8:47771 - 15245 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084016s
	[INFO] 10.244.0.8:47771 - 49510 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000135871s
	[INFO] 10.244.0.8:47771 - 39380 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009801s
	[INFO] 10.244.0.8:47771 - 26219 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000122475s
	[INFO] 10.244.0.8:47771 - 9543 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000161762s
	[INFO] 10.244.0.8:47771 - 10258 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000325294s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214022"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:14:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  gadget                      gadget-lrthv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         18m
	  kube-system                 amd-gpu-device-plugin-k6tpl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-h4thg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-4jxqs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-214022                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-214022                250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-214022       200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-m9kg9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-214022                100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-66898fdd98-qpt8q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-proxy-qdl2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-7d9fbc56b8-fcqg8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-7d9fbc56b8-pnqwn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[  +0.188548] kauditd_printk_skb: 340 callbacks suppressed
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	[Oct13 14:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 65 callbacks suppressed
	[ +12.058507] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000136] kauditd_printk_skb: 22 callbacks suppressed
	[Oct13 14:09] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.303382] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.474208] kauditd_printk_skb: 49 callbacks suppressed
	[Oct13 14:10] kauditd_printk_skb: 90 callbacks suppressed
	[Oct13 14:11] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.690633] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.656333] kauditd_printk_skb: 21 callbacks suppressed
	[Oct13 14:13] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 9 callbacks suppressed
	[Oct13 14:14] kauditd_printk_skb: 26 callbacks suppressed
	[ +24.933780] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"info","ts":"2025-10-13T13:57:03.066329Z","caller":"traceutil/trace.go:172","msg":"trace[1337303940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"235.769671ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066321Z","steps":["trace[1337303940] 'range keys from in-memory index tree'  (duration: 235.56325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.221636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066824Z","caller":"traceutil/trace.go:172","msg":"trace[1790166720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"236.26612ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066818Z","steps":["trace[1790166720] 'range keys from in-memory index tree'  (duration: 236.097045ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T14:09:13.842270Z","caller":"traceutil/trace.go:172","msg":"trace[1885689359] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3177; }","duration":"274.560471ms","start":"2025-10-13T14:09:13.567649Z","end":"2025-10-13T14:09:13.842209Z","steps":["trace[1885689359] 'read index received'  (duration: 274.551109ms)","trace[1885689359] 'applied index is now lower than readState.Index'  (duration: 8.253µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.906716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.580668ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.906823Z","caller":"traceutil/trace.go:172","msg":"trace[1704629397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2982; }","duration":"187.730839ms","start":"2025-10-13T14:09:13.719077Z","end":"2025-10-13T14:09:13.906808Z","steps":["trace[1704629397] 'range keys from in-memory index tree'  (duration: 187.538324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"339.314013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2025-10-13T14:09:13.907424Z","caller":"traceutil/trace.go:172","msg":"trace[692800306] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"346.864291ms","start":"2025-10-13T14:09:13.560497Z","end":"2025-10-13T14:09:13.907361Z","steps":["trace[692800306] 'process raft request'  (duration: 281.825137ms)","trace[692800306] 'compare'  (duration: 64.828079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:13.907508Z","caller":"traceutil/trace.go:172","msg":"trace[107743050] range","detail":"{range_begin:/registry/ipaddresses/10.101.151.157; range_end:; response_count:1; response_revision:2982; }","duration":"339.484538ms","start":"2025-10-13T14:09:13.567635Z","end":"2025-10-13T14:09:13.907120Z","steps":["trace[107743050] 'agreement among raft nodes before linearized reading'  (duration: 274.852745ms)","trace[107743050] 'range keys from in-memory index tree'  (duration: 64.106294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.907801Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.567617Z","time spent":"339.918526ms","remote":"127.0.0.1:33944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":627,"request content":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T14:09:13.908101Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560488Z","time spent":"346.985335ms","remote":"127.0.0.1:33882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":61,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" mod_revision:2971 > success:<request_delete_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > > failure:<request_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > >"}
	{"level":"info","ts":"2025-10-13T14:09:13.908220Z","caller":"traceutil/trace.go:172","msg":"trace[2073246272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"347.573522ms","start":"2025-10-13T14:09:13.560640Z","end":"2025-10-13T14:09:13.908213Z","steps":["trace[2073246272] 'process raft request'  (duration: 346.576205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.908282Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560629Z","time spent":"347.615581ms","remote":"127.0.0.1:33684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":59,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:2972 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"warn","ts":"2025-10-13T14:09:13.910053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.064409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.910727Z","caller":"traceutil/trace.go:172","msg":"trace[1060924441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2983; }","duration":"217.741397ms","start":"2025-10-13T14:09:13.692976Z","end":"2025-10-13T14:09:13.910718Z","steps":["trace[1060924441] 'agreement among raft nodes before linearized reading'  (duration: 216.722483ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:10:52.476707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2025-10-13T14:10:52.510907Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2368,"took":"32.98551ms","hash":1037835104,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5537792,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-13T14:10:52.510982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1037835104,"revision":2368,"compact-revision":1907}
	
	
	==> kernel <==
	 14:14:55 up 19 min,  0 users,  load average: 0.54, 0.78, 0.72
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	I1013 14:08:25.024102       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.588117       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.763275       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.806287       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.836075       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.910579       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.938831       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W1013 14:08:26.095661       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1013 14:08:26.314291       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.607638       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1013 14:08:26.637481       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.689652       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1013 14:08:26.941141       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1013 14:08:26.941574       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1013 14:08:26.961310       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1013 14:08:27.080209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:27.138121       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1013 14:08:28.080963       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1013 14:08:28.086493       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1013 14:08:45.022422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40132: use of closed network connection
	E1013 14:08:45.229592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40168: use of closed network connection
	I1013 14:08:54.741628       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.41.148"}
	I1013 14:09:48.903970       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1013 14:11:31.775897       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 14:11:31.990340       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.79.22"}
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	E1013 14:13:54.114363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:01.537319       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:14:02.085799       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:02.088979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:13.594590       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:13.595832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:14.330599       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:14.331748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:16.536225       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:14:18.885939       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:18.888181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:31.536661       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:14:31.578343       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:31.579874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:31.666963       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:31.668799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:33.574779       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:33.576460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:41.981944       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:41.983086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:46.447147       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:46.448784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:46.538730       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:14:48.845352       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:48.847580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.569998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:54.570036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:54.570113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:14:27 addons-214022 kubelet[1511]: E1013 14:14:27.379795    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:14:27 addons-214022 kubelet[1511]: I1013 14:14:27.451735    1511 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b533bbd9-f503-4f2a-aec6-05a5d0a352d9-config-volume\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:27 addons-214022 kubelet[1511]: I1013 14:14:27.451771    1511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x8t45\" (UniqueName: \"kubernetes.io/projected/b533bbd9-f503-4f2a-aec6-05a5d0a352d9-kube-api-access-x8t45\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:29 addons-214022 kubelet[1511]: I1013 14:14:29.379810    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b533bbd9-f503-4f2a-aec6-05a5d0a352d9" path="/var/lib/kubelet/pods/b533bbd9-f503-4f2a-aec6-05a5d0a352d9/volumes"
	Oct 13 14:14:30 addons-214022 kubelet[1511]: E1013 14:14:30.983657    1511 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 13 14:14:30 addons-214022 kubelet[1511]: E1013 14:14:30.983791    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-gcr-creds podName:3c1885cc-c9ac-48aa-bfe5-5873197f65f5 nodeName:}" failed. No retries permitted until 2025-10-13 14:16:32.983761917 +0000 UTC m=+1235.743669492 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-gcr-creds") pod "registry-creds-764b6fb674-rsjlm" (UID: "3c1885cc-c9ac-48aa-bfe5-5873197f65f5") : secret "registry-creds-gcr" not found
	Oct 13 14:14:32 addons-214022 kubelet[1511]: E1013 14:14:32.376041    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:14:33 addons-214022 kubelet[1511]: I1013 14:14:33.375945    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:33 addons-214022 kubelet[1511]: E1013 14:14:33.377604    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.569919    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.569998    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.570730    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.570924    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:14:41 addons-214022 kubelet[1511]: I1013 14:14:41.375527    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qdl2b" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:45 addons-214022 kubelet[1511]: I1013 14:14:45.376960    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:45 addons-214022 kubelet[1511]: E1013 14:14:45.377549    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:14:46 addons-214022 kubelet[1511]: E1013 14:14:46.380294    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/registry-creds-764b6fb674-rsjlm" podUID="3c1885cc-c9ac-48aa-bfe5-5873197f65f5"
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.524580    1511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hwd\" (UniqueName: \"kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd\") pod \"3c1885cc-c9ac-48aa-bfe5-5873197f65f5\" (UID: \"3c1885cc-c9ac-48aa-bfe5-5873197f65f5\") "
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.530766    1511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd" (OuterVolumeSpecName: "kube-api-access-h4hwd") pod "3c1885cc-c9ac-48aa-bfe5-5873197f65f5" (UID: "3c1885cc-c9ac-48aa-bfe5-5873197f65f5"). InnerVolumeSpecName "kube-api-access-h4hwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.626291    1511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4hwd\" (UniqueName: \"kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:47 addons-214022 kubelet[1511]: I1013 14:14:47.378182    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:47 addons-214022 kubelet[1511]: E1013 14:14:47.379044    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:14:47 addons-214022 kubelet[1511]: I1013 14:14:47.533804    1511 reconciler_common.go:299] "Volume detached for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-gcr-creds\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:49 addons-214022 kubelet[1511]: I1013 14:14:49.378570    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1885cc-c9ac-48aa-bfe5-5873197f65f5" path="/var/lib/kubelet/pods/3c1885cc-c9ac-48aa-bfe5-5873197f65f5/volumes"
	Oct 13 14:14:52 addons-214022 kubelet[1511]: E1013 14:14:52.377591    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:14:30.122970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:32.127692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:32.134359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:34.138584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:34.145546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:36.149955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:36.156824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:38.161192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:38.171482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:40.175892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:40.183926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:42.196474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:42.202540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:44.206332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:44.211910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:46.215784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:46.222007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:48.226420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:48.231981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:50.235966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:50.242443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:52.246820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:52.255589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:54.259714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:54.270715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1 (130.78272ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:11:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhpgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qhpgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m25s                default-scheduler  Successfully assigned default/nginx to addons-214022
	  Normal   Pulling    18s (x5 over 3m24s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     18s (x5 over 3m24s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     18s (x5 over 3m24s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x12 over 3m23s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x12 over 3m23s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cpq8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m41s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-214022
	  Normal   Pulling    2m43s (x5 over 5m41s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m43s (x5 over 5m40s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m43s (x5 over 5m40s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x21 over 5m40s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     39s (x21 over 5m40s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wxvk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8wxvk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (363.16s)

TestAddons/parallel/Ingress (492.95s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-214022 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-214022 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-214022 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-13 14:19:32.328497542 +0000 UTC m=+1463.269055925
addons_test.go:252: (dbg) Run:  kubectl --context addons-214022 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-214022 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-214022/192.168.39.214
Start Time:       Mon, 13 Oct 2025 14:11:31 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
IP:  10.244.0.32
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhpgc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-qhpgc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-214022
Normal   Pulling    4m54s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     4m54s (x5 over 8m)      kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m54s (x5 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     2m51s (x20 over 7m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m38s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-214022 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-214022 logs nginx -n default: exit status 1 (75.665101ms)
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:252: kubectl --context addons-214022 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
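The pod events above show the root cause: every pull of `docker.io/nginx:alpine` failed with `429 Too Many Requests` from Docker Hub's anonymous pull rate limit, so the pod never left `ImagePullBackOff`. As a minimal sketch of triaging this class of failure, the helper below classifies the cause from `kubectl describe pod` text; the function name and pattern list are hypothetical illustrations based on the messages in this report, not an exhaustive taxonomy.

```python
import re

# Hypothetical helper: classify why a pod's image pull is failing, given the
# text of `kubectl describe pod`. The patterns are assumptions drawn from the
# event messages seen in this report.
PULL_FAILURE_PATTERNS = [
    (re.compile(r"429 Too Many Requests|toomanyrequests"), "docker-hub-rate-limit"),
    (re.compile(r"manifest unknown|not found"), "image-not-found"),
    (re.compile(r"unauthorized|authentication required"), "registry-auth"),
]

def classify_pull_failure(describe_output: str) -> str:
    """Return a coarse failure category for an ImagePullBackOff pod."""
    for pattern, label in PULL_FAILURE_PATTERNS:
        if pattern.search(describe_output):
            return label
    return "unknown"

# Event message taken from the report above (abbreviated).
event = ('Failed to pull image "docker.io/nginx:alpine": ... 429 Too Many Requests '
         '- Server message: toomanyrequests: You have reached your unauthenticated '
         'pull rate limit.')
print(classify_pull_failure(event))  # docker-hub-rate-limit
```

A rate-limit classification like this suggests the fix is environmental (authenticate to Docker Hub or use a registry mirror) rather than a bug in the test itself.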
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.444298225s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	│ addons  │ addons-214022 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ enable headlamp -p addons-214022 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:11 UTC │ 13 Oct 25 14:11 UTC │
	│ addons  │ addons-214022 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:13 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:15 UTC │ 13 Oct 25 14:15 UTC │
	│ addons  │ addons-214022 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:15 UTC │ 13 Oct 25 14:15 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
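The header above documents the klog/glog line format used by the rest of this log. As a sketch, assuming only that format string, the lines can be parsed with a regex like this (helper name is illustrative):

```python
import re

# Parser sketch for the klog/glog format documented in the log header:
#   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_LINE = re.compile(
    r"^(?P<level>[IWEF])"                 # severity: Info/Warning/Error/Fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "    # mmdd
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<threadid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] "
    r"(?P<msg>.*)$"
)

def parse_klog(line: str):
    """Return the named fields of a klog-formatted line, or None on no match."""
    m = KLOG_LINE.match(line.strip())
    return m.groupdict() if m else None

# Sample line taken from the log below.
sample = "I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ..."
fields = parse_klog(sample)
print(fields["level"], fields["file"], fields["msg"])  # I out.go Setting OutFile to fd 1 ...
```

This makes it easy to filter a long capture like this one down to, say, only `W`/`E` lines or a single source file.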
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
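The `find`/`mv` step above can be reproduced in miniature. This is a scratch-directory sketch (the real run targets `/etc/cni/net.d` with sudo); matching bridge/podman configs are renamed with a `.mk_disabled` suffix rather than deleted, so they can be restored later:

```shell
# Disable bridge/podman CNI configs by renaming, leaving loopback untouched.
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/99-loopback.conf"
find "$dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$dir" | sort
```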
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
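The crictl endpoint configuration written above is a one-line YAML file. A sketch against a scratch path (the real run writes `/etc/crictl.yaml` via `sudo tee`):

```shell
# Point crictl at the containerd socket.
cfg=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///run/containerd/containerd.sock
' | tee "$cfg" >/dev/null
cat "$cfg"
```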
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
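The `config.toml` rewrites above are in-place `sed` substitutions. A minimal sketch applied to a scratch copy (the real run edits `/etc/containerd/config.toml` with sudo), showing the two most consequential edits: pinning the sandbox image and forcing `SystemdCgroup = false` for the cgroupfs driver:

```shell
# Rewrite a sample containerd config the same way the log does.
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
grep -E 'sandbox_image|SystemdCgroup' "$toml"
```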
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
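The socket-readiness wait above follows a stat/back-off/retry pattern (minikube caps it at 60s). A self-contained sketch where a background `touch` stands in for containerd creating `/run/containerd/containerd.sock`:

```shell
# Poll for a socket path to appear, retrying with a short delay.
sock="$(mktemp -d)/containerd.sock"
( sleep 1; touch "$sock" ) &
ready=no
for _ in $(seq 1 20); do
  if stat "$sock" >/dev/null 2>&1; then ready=yes; break; fi
  sleep 0.5
done
wait
echo "socket ready: $ready"
```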
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
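The `host.minikube.internal` entry is refreshed with a filter-then-append pipeline: drop any stale mapping, append the current one, then copy the result back. Demonstrated on a scratch file (the real run rewrites `/etc/hosts` with `sudo cp`):

```shell
# Replace an existing host.minikube.internal mapping without duplicating it.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.99\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```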
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
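The preload step above copies an lz4-compressed tarball over SSH and unpacks it under `/var` with extended attributes preserved. A miniature of the copy-and-extract flow, with gzip standing in for lz4 since `lz4` is not guaranteed to be installed:

```shell
# Pack a directory, extract it elsewhere with xattrs requested, verify content.
src=$(mktemp -d); dst=$(mktemp -d); ball=$(mktemp)
echo "preloaded-image-layer" > "$src/file"
tar -C "$src" -czf "$ball" .
tar --xattrs -C "$dst" -xzf "$ball"
cat "$dst/file"
```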
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
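The trust-store step above links `minikubeCA.pem` into `/etc/ssl/certs` under its X.509 subject hash (`b5213941.0` here); OpenSSL looks certificates up by that 8-hex-digit name. A throwaway self-signed cert shows how the hash-named link is derived:

```shell
# Compute a cert's subject hash and create the hash-named symlink.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/ca.key" \
  -out "$d/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$h.0"
echo "$h"
```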
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 C
lusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
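The retries above all fail the same way: kubectl's client-side validation rejects `ig-crd.yaml` before anything reaches the API server, because the manifest's top-level `apiVersion` and `kind` fields are missing. A minimal sketch of that check, in simplified form (the `validate_manifest` helper is hypothetical, not kubectl's actual code path):

```python
# Hypothetical, simplified re-creation of the client-side check behind the
# "apiVersion not set, kind not set" errors logged above. Real kubectl does
# far more (schema validation etc.); this only mirrors the two missing-field
# complaints seen in the log.
def validate_manifest(doc: dict) -> list[str]:
    errors = []
    if not doc.get("apiVersion"):
        errors.append("apiVersion not set")
    if not doc.get("kind"):
        errors.append("kind not set")
    return errors

# A manifest like the failing ig-crd.yaml: metadata present, headers missing.
bad = {"metadata": {"name": "example"}}
print(validate_manifest(bad))  # → ['apiVersion not set', 'kind not set']
```

Because the failure is in the manifest itself, re-applying the same files (as the retry loop does) cannot succeed; only a corrected `ig-crd.yaml`, or `--validate=false` as the error text suggests, would change the outcome.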
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
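The `apiVersion not set, kind not set` validation error above typically means that one document in the multi-document manifest stream (`ig-crd.yaml` here) is missing those required top-level fields — a stray `---` separator that leaves an empty or partial document triggers the same message. A minimal sketch of the kind of pre-apply check involved, using plain string handling (the example manifest contents are hypothetical, not the actual ig-crd.yaml):

```python
# Check each YAML document in a multi-doc manifest for the required
# top-level apiVersion and kind fields -- the same requirement that
# kubectl's client-side validation enforces before apply.

def missing_fields(manifest_text):
    """Return (doc_index, missing_field_names) for every document
    that lacks apiVersion or kind at the top level."""
    problems = []
    docs = manifest_text.split("\n---")
    for i, doc in enumerate(docs):
        # Ignore documents that are empty or comment-only.
        lines = [ln for ln in doc.splitlines()
                 if ln.strip() and not ln.strip().startswith("#")]
        if not lines:
            continue
        missing = [f for f in ("apiVersion", "kind")
                   if not any(ln.startswith(f + ":") for ln in lines)]
        if missing:
            problems.append((i, missing))
    return problems

# Hypothetical CRD stream: the second document omits both headers,
# which is the shape of failure reported in the log above.
good = ("apiVersion: apiextensions.k8s.io/v1\n"
        "kind: CustomResourceDefinition\n"
        "metadata:\n  name: traces.gadget.kinvolk.io\n")
bad = good + "---\nmetadata:\n  name: incomplete\n"
print(missing_fields(good))  # []
print(missing_fields(bad))   # [(1, ['apiVersion', 'kind'])]
```

The log also shows `--validate=false` as kubectl's suggested escape hatch, but fixing the offending document is the safer path, since an object without apiVersion/kind cannot be applied meaningfully anyway.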
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
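The gcp-auth notes above describe an opt-out: a pod skips credential injection by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of attaching that label to a pod manifest (the pod name and image are hypothetical; the label key is the one named in the log):

```python
# Add the gcp-auth opt-out label to a pod manifest so the addon's
# webhook skips mounting GCP credentials into it.
import json

def skip_gcp_auth(manifest):
    """Return a copy of the manifest with the opt-out label set."""
    out = json.loads(json.dumps(manifest))  # cheap deep copy
    labels = out.setdefault("metadata", {}).setdefault("labels", {})
    labels["gcp-auth-skip-secret"] = "true"
    return out

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo"},  # hypothetical pod name
    "spec": {"containers": [{"name": "app", "image": "busybox"}]},
}
print(json.dumps(skip_gcp_auth(pod)["metadata"]["labels"]))
# {"gcp-auth-skip-secret": "true"}
```

As the log states, the label must be present at pod creation time; pods that already exist keep their mounts until recreated or until `addons enable` is rerun with `--refresh`.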
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.335102 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.834820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.337927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.836162 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.334652 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.335329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.836940 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.335265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.835299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.334493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.835958 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.336901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.836037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.334865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.335331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.835376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.334760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.835451 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.335213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.835487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.334559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.835709 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.336510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.835078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.334427 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.835800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.836213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.835870 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.336474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.835258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.335636 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.335125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.835336 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.334300 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.834511 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.334734 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.336483 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.835357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.334098 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.834039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.336018 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.836261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.334061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.834919 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.334649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.835154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.336354 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.834937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.335025 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.835808 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.835220 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.335287 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.336327 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.836514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.835391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.335754 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.834954 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.337125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.836950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.335741 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.835238 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.334514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.836800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.335199 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.834223 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.334374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.834313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.335017 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.836739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.836138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.837760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.335601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.834423 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.335277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.334190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.835779 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.335566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.834803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.335076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.834352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.337145 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.836318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.335627 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.834879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.335150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.834450 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.335022 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.836226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.335160 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.836271 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.335103 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.335568 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.836839 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.335318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.836164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.334826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.835127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.336865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.836135 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.336673 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.835150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.334589 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.834578 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.335334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.835296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.335639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.836101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.334964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.835761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.335325 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.836391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.335041 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.836020 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.335603 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.834446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.336822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.835728 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.834134 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.335154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.836561 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.336212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.835791 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.335558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.835276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.335841 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.836019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.334744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.834701 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.335446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.835594 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.337105 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.834479 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.335535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.835194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.335256 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.834824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.336078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.835454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.335291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.835631 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.336375 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.835517 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.335533 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.835668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.334675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.836765 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.335738 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.835614 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.334992 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.834761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.835039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.335024 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.835393 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.335510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.834835 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.335247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.835193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.337646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.334671 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.835950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.335072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.835262 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.336068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.838250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.336473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.834196 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.835516 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.336890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.336117 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.835027 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.336076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.835382 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.334500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.835763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.335780 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.834829 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.335922 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.835807 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.335268 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.835042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.334861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.835742 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.835542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.336308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.834819 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.334458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.834430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.334302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.834698 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.335242 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.837355 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.334901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.835822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.335481 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.835077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.335379 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.835858 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.335030 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.334406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.835970 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.336845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.835639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.334566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.834610 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.335758 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.834181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.335230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.836521 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.335115 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.834296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.334011 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.835572 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.334655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.837467 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.334547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.835937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.335478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.334801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.834872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.834089 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.334927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.835775 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.334557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.834110 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.336120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.835608 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.338054 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.835852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.335214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.835500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.334478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.835206 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.335016 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.835509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.334080 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.835482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.336619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.835408 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.334489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.834778 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.334764 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.836472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.834969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.335466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.336616 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.835557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.335389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.837280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.335407 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.835989 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.334416 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.336883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.835437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.334771 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.836376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.334601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.334699 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.834770 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.334874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.835696 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.335335 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.836061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.334551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.836309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.335167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.835702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.334763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.335505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.835798 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.335506 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.836329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.335321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.834801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.334908 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.835943 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.335962 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.836396 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.335654 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.835633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.335803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.835579 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.334633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.335151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.835600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.336050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.835564 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.335649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.335190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.834455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.334544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.835370 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.834672 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.334781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.834666 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.835748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.335284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.835158 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.337417 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.835644 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.335243 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.835634 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.335832 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.836076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.336097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.835499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.334133 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.334598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.835174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.335615 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.835346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.334875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.835362 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.335392 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.835890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.336384 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.835565 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.334702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.836069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.335345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.835340 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.338240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.836180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.336383 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.835503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 199 further identical "Pending" polls at ~500ms intervals, 14:00:38 through 14:02:17, elided ...]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4b9c2b1e8388b       56cc512116c8f       10 minutes ago      Running             busybox                   0                   c2017033bd492       busybox
	d6a3c830fdead       1bec18b3728e7       22 minutes ago      Running             controller                0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	ac9ca79606b04       8c217da6734db       22 minutes ago      Exited              patch                     0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       22 minutes ago      Exited              create                    0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       22 minutes ago      Running             gadget                    0                   cd47cb2e122c6       gadget-lrthv
	55e4c7d9441ba       b1c9f9ef5f0c2       22 minutes ago      Running             registry-proxy            0                   dbfd8a2965678       registry-proxy-qdl2b
	11373ec0dad23       b6ab53fbfedaa       22 minutes ago      Running             minikube-ingress-dns      0                   25d666aa48ee6       kube-ingress-dns-minikube
	61d2e3b41e535       6e38f40d628db       23 minutes ago      Running             storage-provisioner       0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       23 minutes ago      Running             amd-gpu-device-plugin     0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       23 minutes ago      Running             coredns                   0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       23 minutes ago      Running             kube-proxy                0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       23 minutes ago      Running             kube-controller-manager   0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       23 minutes ago      Running             etcd                      0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       23 minutes ago      Running             kube-scheduler            0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       23 minutes ago      Running             kube-apiserver            0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.201507073Z" level=info msg="TearDown network for sandbox \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\" successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.201537567Z" level=info msg="StopPodSandbox for \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\" returns successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.202522020Z" level=info msg="RemovePodSandbox for \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\""
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.202552763Z" level=info msg="Forcibly stopping sandbox \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\""
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.230899157Z" level=info msg="TearDown network for sandbox \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\" successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.237295619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.237570916Z" level=info msg="RemovePodSandbox \"fc7a88bf2bbfa3783c81adc62dcffe298f2496b9564b83e09c7f5bfe49139f35\" returns successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.238497245Z" level=info msg="StopPodSandbox for \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\""
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.271478921Z" level=info msg="TearDown network for sandbox \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\" successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.271511094Z" level=info msg="StopPodSandbox for \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\" returns successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.272027589Z" level=info msg="RemovePodSandbox for \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\""
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.272075471Z" level=info msg="Forcibly stopping sandbox \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\""
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.301541536Z" level=info msg="TearDown network for sandbox \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\" successfully"
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.310623338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 13 14:15:59 addons-214022 containerd[816]: time="2025-10-13T14:15:59.310789005Z" level=info msg="RemovePodSandbox \"1571308a931464378bd920edb1558b36666dd38e651c529e09dfca2778d3fa0e\" returns successfully"
	Oct 13 14:17:32 addons-214022 containerd[816]: time="2025-10-13T14:17:32.378041150Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 13 14:17:32 addons-214022 containerd[816]: time="2025-10-13T14:17:32.381178981Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:17:32 addons-214022 containerd[816]: time="2025-10-13T14:17:32.511939993Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:17:32 addons-214022 containerd[816]: time="2025-10-13T14:17:32.610640171Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:17:32 addons-214022 containerd[816]: time="2025-10-13T14:17:32.610845137Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10965"
	Oct 13 14:17:44 addons-214022 containerd[816]: time="2025-10-13T14:17:44.376720099Z" level=info msg="PullImage \"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\""
	Oct 13 14:17:44 addons-214022 containerd[816]: time="2025-10-13T14:17:44.379725889Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:17:44 addons-214022 containerd[816]: time="2025-10-13T14:17:44.451007639Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:17:44 addons-214022 containerd[816]: time="2025-10-13T14:17:44.550168328Z" level=error msg="PullImage \"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\" failed" error="failed to pull and unpack image \"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:17:44 addons-214022 containerd[816]: time="2025-10-13T14:17:44.550248142Z" level=info msg="stop pulling image docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: active requests=0, bytes read=10983"
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:59019 - 12574 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000232812s
	[INFO] 10.244.0.8:60036 - 57979 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000182194s
	[INFO] 10.244.0.8:60036 - 18152 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000317463s
	[INFO] 10.244.0.8:60036 - 51932 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000101175s
	[INFO] 10.244.0.8:60036 - 44152 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000573995s
	[INFO] 10.244.0.8:60036 - 1962 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000305117s
	[INFO] 10.244.0.8:60036 - 56942 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.0003429s
	[INFO] 10.244.0.8:60036 - 19267 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000320803s
	[INFO] 10.244.0.8:60036 - 26656 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000350767s
	[INFO] 10.244.0.8:43439 - 41426 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000232446s
	[INFO] 10.244.0.8:43439 - 43701 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000397559s
	[INFO] 10.244.0.8:43439 - 47982 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000140315s
	[INFO] 10.244.0.8:43439 - 49030 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000179197s
	[INFO] 10.244.0.8:43439 - 55246 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000155642s
	[INFO] 10.244.0.8:43439 - 47051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00102433s
	[INFO] 10.244.0.8:43439 - 20440 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000353779s
	[INFO] 10.244.0.8:43439 - 58549 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000146808s
	[INFO] 10.244.0.8:34453 - 56400 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000174919s
	[INFO] 10.244.0.8:34453 - 48619 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000363s
	[INFO] 10.244.0.8:34453 - 33269 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112248s
	[INFO] 10.244.0.8:34453 - 57485 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000074519s
	[INFO] 10.244.0.8:34453 - 56231 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000470815s
	[INFO] 10.244.0.8:34453 - 52449 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000068117s
	[INFO] 10.244.0.8:34453 - 21045 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000351294s
	[INFO] 10.244.0.8:34453 - 62004 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000691s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:19:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gadget                      gadget-lrthv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         23m
	  kube-system                 amd-gpu-device-plugin-k6tpl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-66bc5c9577-h4thg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     23m
	  kube-system                 etcd-addons-214022                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         23m
	  kube-system                 kube-apiserver-addons-214022                250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-addons-214022       200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-m9kg9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-addons-214022                100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 registry-66898fdd98-qpt8q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 registry-proxy-qdl2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	[Oct13 14:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 65 callbacks suppressed
	[ +12.058507] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000136] kauditd_printk_skb: 22 callbacks suppressed
	[Oct13 14:09] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.303382] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.474208] kauditd_printk_skb: 49 callbacks suppressed
	[Oct13 14:10] kauditd_printk_skb: 90 callbacks suppressed
	[Oct13 14:11] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.690633] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.656333] kauditd_printk_skb: 21 callbacks suppressed
	[Oct13 14:13] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 9 callbacks suppressed
	[Oct13 14:14] kauditd_printk_skb: 26 callbacks suppressed
	[ +24.933780] kauditd_printk_skb: 9 callbacks suppressed
	[Oct13 14:15] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T14:09:13.842270Z","caller":"traceutil/trace.go:172","msg":"trace[1885689359] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3177; }","duration":"274.560471ms","start":"2025-10-13T14:09:13.567649Z","end":"2025-10-13T14:09:13.842209Z","steps":["trace[1885689359] 'read index received'  (duration: 274.551109ms)","trace[1885689359] 'applied index is now lower than readState.Index'  (duration: 8.253µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.906716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.580668ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.906823Z","caller":"traceutil/trace.go:172","msg":"trace[1704629397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2982; }","duration":"187.730839ms","start":"2025-10-13T14:09:13.719077Z","end":"2025-10-13T14:09:13.906808Z","steps":["trace[1704629397] 'range keys from in-memory index tree'  (duration: 187.538324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"339.314013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2025-10-13T14:09:13.907424Z","caller":"traceutil/trace.go:172","msg":"trace[692800306] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"346.864291ms","start":"2025-10-13T14:09:13.560497Z","end":"2025-10-13T14:09:13.907361Z","steps":["trace[692800306] 'process raft request'  (duration: 281.825137ms)","trace[692800306] 'compare'  (duration: 64.828079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:13.907508Z","caller":"traceutil/trace.go:172","msg":"trace[107743050] range","detail":"{range_begin:/registry/ipaddresses/10.101.151.157; range_end:; response_count:1; response_revision:2982; }","duration":"339.484538ms","start":"2025-10-13T14:09:13.567635Z","end":"2025-10-13T14:09:13.907120Z","steps":["trace[107743050] 'agreement among raft nodes before linearized reading'  (duration: 274.852745ms)","trace[107743050] 'range keys from in-memory index tree'  (duration: 64.106294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.907801Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.567617Z","time spent":"339.918526ms","remote":"127.0.0.1:33944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":627,"request content":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T14:09:13.908101Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560488Z","time spent":"346.985335ms","remote":"127.0.0.1:33882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":61,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" mod_revision:2971 > success:<request_delete_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > > failure:<request_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > >"}
	{"level":"info","ts":"2025-10-13T14:09:13.908220Z","caller":"traceutil/trace.go:172","msg":"trace[2073246272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"347.573522ms","start":"2025-10-13T14:09:13.560640Z","end":"2025-10-13T14:09:13.908213Z","steps":["trace[2073246272] 'process raft request'  (duration: 346.576205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.908282Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560629Z","time spent":"347.615581ms","remote":"127.0.0.1:33684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":59,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:2972 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"warn","ts":"2025-10-13T14:09:13.910053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.064409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.910727Z","caller":"traceutil/trace.go:172","msg":"trace[1060924441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2983; }","duration":"217.741397ms","start":"2025-10-13T14:09:13.692976Z","end":"2025-10-13T14:09:13.910718Z","steps":["trace[1060924441] 'agreement among raft nodes before linearized reading'  (duration: 216.722483ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:10:52.476707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2025-10-13T14:10:52.510907Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2368,"took":"32.98551ms","hash":1037835104,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5537792,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-13T14:10:52.510982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1037835104,"revision":2368,"compact-revision":1907}
	{"level":"info","ts":"2025-10-13T14:15:52.484323Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3227}
	{"level":"info","ts":"2025-10-13T14:15:52.526783Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3227,"took":"40.905812ms","hash":1273572316,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5365760,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2025-10-13T14:15:52.526855Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1273572316,"revision":3227,"compact-revision":2368}
	
	
	==> kernel <==
	 14:19:33 up 24 min,  0 users,  load average: 0.03, 0.38, 0.56
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	W1013 14:08:26.961310       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1013 14:08:27.080209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:27.138121       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1013 14:08:28.080963       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1013 14:08:28.086493       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1013 14:08:45.022422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40132: use of closed network connection
	E1013 14:08:45.229592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40168: use of closed network connection
	I1013 14:08:54.741628       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.41.148"}
	I1013 14:09:48.903970       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1013 14:11:31.775897       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 14:11:31.990340       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.79.22"}
	I1013 14:15:18.717939       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 14:15:18.718470       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 14:15:18.775168       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 14:15:18.775227       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 14:15:18.777503       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 14:15:18.777815       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 14:15:18.797784       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 14:15:18.797839       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 14:15:18.828831       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 14:15:18.828881       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1013 14:15:19.777966       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1013 14:15:19.829571       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1013 14:15:19.855224       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1013 14:15:54.491091       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	E1013 14:19:01.487584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:01.550131       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:19:02.644214       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:02.645758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:06.540647       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:06.542010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:06.775845       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:06.777097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:11.496879       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:11.499615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:13.505434       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:13.507224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:16.551445       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:19:19.173256       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:19.174731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:24.009628       1 csi_attacher.go:520] kubernetes.io/csi: Attach timeout after 2m0s [volume=335344c0-a83e-11f0-913e-3a596a4bac78; attachment.ID=csi-029633a715c668fdd31933ff5356d9221e60b0d14e2251c5f554c6f8e5d985c9]
	E1013 14:19:24.009956       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^335344c0-a83e-11f0-913e-3a596a4bac78 podName: nodeName:}" failed. No retries permitted until 2025-10-13 14:19:25.009880964 +0000 UTC m=+1413.216623939 (durationBeforeRetry 1s). Error: AttachVolume.Attach failed for volume "pvc-c902721e-fd87-4fda-9939-d6e0266b2309" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^335344c0-a83e-11f0-913e-3a596a4bac78") from node "addons-214022" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 335344c0-a83e-11f0-913e-3a596a4bac78
	I1013 14:19:25.012900       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^335344c0-a83e-11f0-913e-3a596a4bac78" nodeName="addons-214022" scheduledPods=["default/task-pv-pod"]
	E1013 14:19:28.873180       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:28.874487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:29.782770       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:29.784134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:19:31.552030       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:19:32.927338       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:19:32.929119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1013 14:18:53.982232       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-214022"
	E1013 14:18:53.983094       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
	E1013 14:18:55.001590       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/test-local-path" status="not found"
	
	
	==> kubelet <==
	Oct 13 14:18:21 addons-214022 kubelet[1511]: E1013 14:18:21.377799    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:18:26 addons-214022 kubelet[1511]: I1013 14:18:26.375318    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:18:27 addons-214022 kubelet[1511]: I1013 14:18:27.379209    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:18:27 addons-214022 kubelet[1511]: E1013 14:18:27.382013    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:18:32 addons-214022 kubelet[1511]: E1013 14:18:32.377149    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:18:33 addons-214022 kubelet[1511]: W1013 14:18:33.283203    1511 logging.go:55] [core] [Channel #72 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 13 14:18:36 addons-214022 kubelet[1511]: E1013 14:18:36.376124    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:18:40 addons-214022 kubelet[1511]: I1013 14:18:40.375850    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:18:40 addons-214022 kubelet[1511]: E1013 14:18:40.377723    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:18:44 addons-214022 kubelet[1511]: E1013 14:18:44.376951    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:18:51 addons-214022 kubelet[1511]: E1013 14:18:51.376029    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:18:54 addons-214022 kubelet[1511]: I1013 14:18:54.376327    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:18:54 addons-214022 kubelet[1511]: E1013 14:18:54.378140    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:18:57 addons-214022 kubelet[1511]: E1013 14:18:57.376850    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:19:03 addons-214022 kubelet[1511]: I1013 14:19:03.376572    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qdl2b" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:19:04 addons-214022 kubelet[1511]: E1013 14:19:04.376350    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:19:08 addons-214022 kubelet[1511]: I1013 14:19:08.375448    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:19:08 addons-214022 kubelet[1511]: E1013 14:19:08.376588    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:19:10 addons-214022 kubelet[1511]: E1013 14:19:10.376169    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:19:15 addons-214022 kubelet[1511]: E1013 14:19:15.376077    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:19:21 addons-214022 kubelet[1511]: I1013 14:19:21.376972    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:19:21 addons-214022 kubelet[1511]: E1013 14:19:21.378715    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:19:22 addons-214022 kubelet[1511]: E1013 14:19:22.376363    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:19:27 addons-214022 kubelet[1511]: E1013 14:19:27.376735    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:19:32 addons-214022 kubelet[1511]: I1013 14:19:32.375452    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-k6tpl" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:19:09.844848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:11.848502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:11.858184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:13.861864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:13.874043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:15.877608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:15.883131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:17.886705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:17.896138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:19.900013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:19.905799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:21.910733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:21.919470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:23.924027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:23.929897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:25.934232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:25.940878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:27.944969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:27.951686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:29.956180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:29.963178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:31.966681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:31.972612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:33.977756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:19:33.984605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1 (95.848951ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:11:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhpgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qhpgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-214022
	  Normal   Pulling    4m56s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m56s (x5 over 8m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m56s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m53s (x20 over 8m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m40s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cpq8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                  From                     Message
	  ----     ------              ----                 ----                     -------
	  Normal   Scheduled           10m                  default-scheduler        Successfully assigned default/task-pv-pod to addons-214022
	  Normal   Pulling             7m21s (x5 over 10m)  kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              7m21s (x5 over 10m)  kubelet                  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              7m21s (x5 over 10m)  kubelet                  Error: ErrImagePull
	  Warning  FailedAttachVolume  10s (x2 over 2m11s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-c902721e-fd87-4fda-9939-d6e0266b2309" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 335344c0-a83e-11f0-913e-3a596a4bac78
	  Normal   BackOff             7s (x44 over 10m)    kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              7s (x44 over 10m)    kubelet                  Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wxvk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8wxvk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  41s   default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded
	  Warning  FailedScheduling  39s   default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. not found

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable ingress-dns --alsologtostderr -v=1: (1.679852443s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable ingress --alsologtostderr -v=1: (7.876902658s)
--- FAIL: TestAddons/parallel/Ingress (492.95s)
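Every image-pull failure in the events above is the same root cause: Docker Hub returning `429 Too Many Requests` (`toomanyrequests`) for unauthenticated pulls, not a manifest or networking problem. A minimal triage sketch (a hypothetical helper, not part of the minikube test harness) that classifies kubelet pull-error lines by cause, using marker strings taken from the log above:

```python
# Classify kubelet image-pull failure messages by coarse root cause.
# Hypothetical triage helper; marker strings mirror the errors logged above.

def classify_pull_failure(message: str) -> str:
    """Return a coarse cause label for a kubelet image-pull error line."""
    if "429 Too Many Requests" in message or "toomanyrequests" in message:
        return "registry-rate-limit"  # Docker Hub unauthenticated pull cap
    if "manifest unknown" in message or "not found" in message:
        return "missing-image"
    if "x509" in message or "certificate" in message:
        return "tls-error"
    return "other"

# A line taken (abbreviated) from the kubelet log above.
line = ('failed to pull and unpack image "docker.io/library/nginx:alpine": '
        '429 Too Many Requests - Server message: toomanyrequests: '
        'You have reached your unauthenticated pull rate limit.')
print(classify_pull_failure(line))  # → registry-rate-limit
```

Run against the kubelet events in this report, every `Failed`/`ErrImagePull` line classifies as `registry-rate-limit`, which is why all six non-running pods share the same symptom.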

TestAddons/parallel/CSI (372.14s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1013 14:09:14.024690 1814927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1013 14:09:14.030735 1814927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 14:09:14.030761 1814927 kapi.go:107] duration metric: took 6.087204ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.09688ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-214022 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-214022 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bda8657d-2e14-4dc2-9e93-ecb85c37f5ed] Pending
helpers_test.go:352: "task-pv-pod" [bda8657d-2e14-4dc2-9e93-ecb85c37f5ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-13 14:15:15.624518435 +0000 UTC m=+1206.565076809
addons_test.go:567: (dbg) Run:  kubectl --context addons-214022 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-214022 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-214022/192.168.39.214
Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-cpq8h:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-214022
Normal   Pulling    3m2s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m2s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m2s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    58s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     58s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-214022 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-214022 logs task-pv-pod -n default: exit status 1 (69.389478ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

** /stderr **
addons_test.go:567: kubectl --context addons-214022 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
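The timeout above traces back to Docker Hub's unauthenticated pull rate limit (the 429 `toomanyrequests` responses in the pod events), not to the CSI driver itself. A sketch of two common mitigations for this kind of CI run, assuming Docker Hub credentials are available on the host (the profile name is taken from this log):

```shell
# Sketch: avoid Docker Hub's unauthenticated 429s in a run like this.
# Assumes `docker login` has been done on the host; profile name from the log.

# Option 1: pull once on the (authenticated) host, then preload the image
# into the minikube node so the kubelet never hits the registry.
docker pull docker.io/nginx:alpine
minikube -p addons-214022 image load docker.io/nginx:alpine

# Option 2: route pulls through a mirror when creating the cluster.
minikube start -p addons-214022 --registry-mirror=https://mirror.gcr.io
```

Either approach keeps the kubelet's pulls off registry-1.docker.io, so `ImagePullBackOff` from rate limiting cannot stall the 6m0s pod-ready wait.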
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.499963434s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	│ addons  │ addons-214022 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ enable headlamp -p addons-214022 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:11 UTC │ 13 Oct 25 14:11 UTC │
	│ addons  │ addons-214022 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:13 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	│ addons  │ addons-214022 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:14 UTC │ 13 Oct 25 14:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
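The sed edits logged above rewrite /etc/containerd/config.toml in place, most notably forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver. A minimal sketch of that rewrite, run against a temporary stand-in file rather than the real /etc/containerd/config.toml:

```shell
# Sketch only: $cfg is a scratch copy standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution minikube runs: flip SystemdCgroup while preserving indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The capture group `( *)` keeps the original leading spaces, so the edit is safe regardless of how deeply the key is nested in the TOML.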
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
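The /etc/hosts update above is idempotent: it filters out any existing `host.minikube.internal` mapping before appending the current one, so repeated runs never accumulate stale entries. A sketch of the same pattern against a temporary copy (paths and the stale 192.168.39.2 address are illustrative, not from the log):

```shell
# Sketch only: $hosts is a scratch file standing in for /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n' > "$hosts"
# Drop any old mapping, then append the current one, as in the logged command.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and moving it into place mirrors the logged `> /tmp/h.$$; sudo cp ...` approach, which avoids truncating the live hosts file mid-write.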
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
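The validation failure repeated above is kubectl's client-side schema check: every document in an applied manifest must carry top-level `apiVersion` and `kind` fields, and at least one document in `/etc/kubernetes/addons/ig-crd.yaml` is missing both (likely an empty or truncated YAML document in the file). A minimal sketch of the header shape the validator requires — the CRD name and group below are illustrative placeholders, not verified against the inspektor-gadget addon:

```yaml
# Every YAML document passed to `kubectl apply` must declare these two
# top-level fields, or validation fails exactly as in the log above.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.gadget.example.io   # illustrative, not the real CRD name
spec:
  group: gadget.example.io
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Note that the workaround suggested in the error (`--validate=false`) would only mask the problem: a document without `apiVersion`/`kind` cannot be applied to the API server at all, which is why the retry loop keeps failing with the same output.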
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.335102 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.834820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.337927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.836162 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.334652 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.335329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.836940 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.335265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.835299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.334493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.835958 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.336901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.836037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.334865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.335331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.835376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.334760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.835451 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.335213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.835487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.334559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.835709 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.336510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.835078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.334427 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.835800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.836213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.835870 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.336474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.835258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.335636 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.335125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.835336 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.334300 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.834511 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.334734 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.336483 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.835357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.334098 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.834039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.336018 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.836261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.334061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.834919 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.334649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.835154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.336354 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.834937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.335025 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.835808 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.835220 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.335287 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.336327 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.836514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.835391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.335754 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.834954 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.337125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.836950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.335741 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.835238 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.334514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.836800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.335199 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.834223 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.334374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.834313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.335017 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.836739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.836138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.837760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.335601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.834423 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.335277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.334190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.835779 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.335566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.834803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.335076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.834352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.337145 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.836318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.335627 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.834879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.335150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.834450 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.335022 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.836226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.335160 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.836271 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.335103 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.335568 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.836839 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.335318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.836164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.334826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.835127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.336865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.836135 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.336673 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.835150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.334589 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.834578 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.335334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.835296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.335639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.836101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.334964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.835761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.335325 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.836391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.335041 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.836020 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.335603 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.834446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.336822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.835728 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.834134 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.335154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.836561 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.336212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.835791 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.335558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.835276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.335841 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.836019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.334744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.834701 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.335446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.835594 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.337105 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.834479 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.335535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.835194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.335256 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.834824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.336078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.835454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.335291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.835631 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.336375 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.835517 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.335533 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.835668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.334675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.836765 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.335738 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.835614 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.334992 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.834761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.835039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.335024 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.835393 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.335510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.834835 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.335247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.835193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.337646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.334671 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.835950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.335072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.835262 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.336068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.838250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.336473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.834196 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.835516 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.336890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.336117 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.835027 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.336076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.835382 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.334500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.835763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.335780 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.834829 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.335922 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.835807 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.335268 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.835042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.334861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.835742 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.835542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.336308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.834819 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.334458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.834430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.334302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.834698 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.335242 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.837355 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.334901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.835822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.335481 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.835077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.335379 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.835858 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.335030 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.334406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.835970 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.336845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.835639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.334566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.834610 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.335758 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.834181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.335230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.836521 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.335115 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.834296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.334011 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.835572 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.334655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.837467 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.334547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.835937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.335478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" log line repeated every ~500ms from 13:59:34 through 14:01:49 ...]
	I1013 14:01:50.334303 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:50.836073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.337121 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.335474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.835147 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.834679 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.334975 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.835505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.335547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.834320 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.337072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.835338 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.334677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.835088 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.334605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.834688 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.835956 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.336504 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.836995 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.335212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.834385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.335476 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.835502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.335371 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.836012 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.335744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.834380 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.335240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.835337 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.335893 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.834524 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.334081 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.835413 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.334814 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.834505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.335015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.835005 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.336275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.835387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.335267 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.835234 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.335689 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.336968 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.835611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.835927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.337411 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.834441 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.335062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.835993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4b9c2b1e8388b       56cc512116c8f       6 minutes ago       Running             busybox                                  0                   c2017033bd492       busybox
	d6a3c830fdead       1bec18b3728e7       17 minutes ago      Running             controller                               0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	dc9eac6946abb       738351fd438f0       18 minutes ago      Running             csi-snapshotter                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	caf59fa52cf6c       931dbfd16f87c       18 minutes ago      Running             csi-provisioner                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	dcdb3cedeedc5       e899260153aed       18 minutes ago      Running             liveness-probe                           0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	20320037960be       e255e073c508c       18 minutes ago      Running             hostpath                                 0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	251c9387cb3f1       88ef14a257f42       18 minutes ago      Running             node-driver-registrar                    0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	4bf53d30ff2bf       19a639eda60f0       18 minutes ago      Running             csi-resizer                              0                   38173b2da332e       csi-hostpath-resizer-0
	da92c998f6d36       a1ed5895ba635       18 minutes ago      Running             csi-external-health-monitor-controller   0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	fdb740423cae7       aa61ee9c70bc4       18 minutes ago      Running             volume-snapshot-controller               0                   d87f7092f76cb       snapshot-controller-7d9fbc56b8-fcqg8
	d9300160a8179       59cbb42146a37       18 minutes ago      Running             csi-attacher                             0                   1571308a93146       csi-hostpath-attacher-0
	59dcea13b91a7       aa61ee9c70bc4       18 minutes ago      Running             volume-snapshot-controller               0                   fc7a88bf2bbfa       snapshot-controller-7d9fbc56b8-pnqwn
	ac9ca79606b04       8c217da6734db       18 minutes ago      Exited              patch                                    0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       18 minutes ago      Exited              create                                   0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       18 minutes ago      Running             gadget                                   0                   cd47cb2e122c6       gadget-lrthv
	55e4c7d9441ba       b1c9f9ef5f0c2       18 minutes ago      Running             registry-proxy                           0                   dbfd8a2965678       registry-proxy-qdl2b
	11373ec0dad23       b6ab53fbfedaa       18 minutes ago      Running             minikube-ingress-dns                     0                   25d666aa48ee6       kube-ingress-dns-minikube
	61d2e3b41e535       6e38f40d628db       19 minutes ago      Running             storage-provisioner                      0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       19 minutes ago      Running             amd-gpu-device-plugin                    0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       19 minutes ago      Running             coredns                                  0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       19 minutes ago      Running             kube-proxy                               0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       19 minutes ago      Running             kube-controller-manager                  0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       19 minutes ago      Running             etcd                                     0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       19 minutes ago      Running             kube-scheduler                           0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       19 minutes ago      Running             kube-apiserver                           0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.380293733Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.479291104Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.569564325Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:14:38 addons-214022 containerd[816]: time="2025-10-13T14:14:38.569663145Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10965"
	Oct 13 14:14:56 addons-214022 containerd[816]: time="2025-10-13T14:14:56.376564162Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 13 14:14:56 addons-214022 containerd[816]: time="2025-10-13T14:14:56.379598965Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:56 addons-214022 containerd[816]: time="2025-10-13T14:14:56.456660083Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:14:56 addons-214022 containerd[816]: time="2025-10-13T14:14:56.582505085Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:14:56 addons-214022 containerd[816]: time="2025-10-13T14:14:56.582594331Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.802219760Z" level=info msg="StopPodSandbox for \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.831572356Z" level=info msg="TearDown network for sandbox \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.831658462Z" level=info msg="StopPodSandbox for \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" returns successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.832279365Z" level=info msg="RemovePodSandbox for \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.832320473Z" level=info msg="Forcibly stopping sandbox \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.863063241Z" level=info msg="TearDown network for sandbox \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.870213966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.870308331Z" level=info msg="RemovePodSandbox \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\" returns successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.871522897Z" level=info msg="StopPodSandbox for \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.912600496Z" level=info msg="TearDown network for sandbox \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.912716456Z" level=info msg="StopPodSandbox for \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" returns successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.913337955Z" level=info msg="RemovePodSandbox for \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.914038148Z" level=info msg="Forcibly stopping sandbox \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\""
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.950612455Z" level=info msg="TearDown network for sandbox \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" successfully"
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.957710896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 13 14:14:58 addons-214022 containerd[816]: time="2025-10-13T14:14:58.957823953Z" level=info msg="RemovePodSandbox \"b07165834017ed8e56090fcc5947df423c273995bd9c94bd3fbe92a72ad5d731\" returns successfully"
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:51391 - 10473 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000252155s
	[INFO] 10.244.0.8:59873 - 60140 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000149804s
	[INFO] 10.244.0.8:59873 - 1616 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000288142s
	[INFO] 10.244.0.8:59873 - 51054 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000105111s
	[INFO] 10.244.0.8:59873 - 54048 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000842614s
	[INFO] 10.244.0.8:59873 - 845 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000078719s
	[INFO] 10.244.0.8:59873 - 2896 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000083731s
	[INFO] 10.244.0.8:59873 - 60358 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000097779s
	[INFO] 10.244.0.8:59873 - 35047 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000345687s
	[INFO] 10.244.0.8:57330 - 34497 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000160333s
	[INFO] 10.244.0.8:57330 - 50980 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000520561s
	[INFO] 10.244.0.8:57330 - 33206 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000129086s
	[INFO] 10.244.0.8:57330 - 12085 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000336273s
	[INFO] 10.244.0.8:57330 - 17597 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000246451s
	[INFO] 10.244.0.8:57330 - 9111 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000272715s
	[INFO] 10.244.0.8:57330 - 28158 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000123731s
	[INFO] 10.244.0.8:57330 - 17609 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000156642s
	[INFO] 10.244.0.8:41963 - 22396 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000248343s
	[INFO] 10.244.0.8:41963 - 15464 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000495333s
	[INFO] 10.244.0.8:41963 - 38217 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000091434s
	[INFO] 10.244.0.8:41963 - 33846 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000095373s
	[INFO] 10.244.0.8:41963 - 7714 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000305657s
	[INFO] 10.244.0.8:41963 - 12408 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000265532s
	[INFO] 10.244.0.8:41963 - 42823 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087987s
	[INFO] 10.244.0.8:41963 - 1985 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000351379s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214022"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:15:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:15:01 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-lrthv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         19m
	  kube-system                 amd-gpu-device-plugin-k6tpl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-h4thg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-4jxqs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-214022                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-214022                250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-214022       200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-m9kg9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-214022                100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 registry-66898fdd98-qpt8q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 registry-proxy-qdl2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-7d9fbc56b8-fcqg8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-7d9fbc56b8-pnqwn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[  +0.188548] kauditd_printk_skb: 340 callbacks suppressed
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	[Oct13 14:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 65 callbacks suppressed
	[ +12.058507] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000136] kauditd_printk_skb: 22 callbacks suppressed
	[Oct13 14:09] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.303382] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.474208] kauditd_printk_skb: 49 callbacks suppressed
	[Oct13 14:10] kauditd_printk_skb: 90 callbacks suppressed
	[Oct13 14:11] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.690633] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.656333] kauditd_printk_skb: 21 callbacks suppressed
	[Oct13 14:13] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 9 callbacks suppressed
	[Oct13 14:14] kauditd_printk_skb: 26 callbacks suppressed
	[ +24.933780] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"info","ts":"2025-10-13T13:57:03.066329Z","caller":"traceutil/trace.go:172","msg":"trace[1337303940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"235.769671ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066321Z","steps":["trace[1337303940] 'range keys from in-memory index tree'  (duration: 235.56325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.221636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066824Z","caller":"traceutil/trace.go:172","msg":"trace[1790166720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"236.26612ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066818Z","steps":["trace[1790166720] 'range keys from in-memory index tree'  (duration: 236.097045ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T14:09:13.842270Z","caller":"traceutil/trace.go:172","msg":"trace[1885689359] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3177; }","duration":"274.560471ms","start":"2025-10-13T14:09:13.567649Z","end":"2025-10-13T14:09:13.842209Z","steps":["trace[1885689359] 'read index received'  (duration: 274.551109ms)","trace[1885689359] 'applied index is now lower than readState.Index'  (duration: 8.253µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.906716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.580668ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.906823Z","caller":"traceutil/trace.go:172","msg":"trace[1704629397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2982; }","duration":"187.730839ms","start":"2025-10-13T14:09:13.719077Z","end":"2025-10-13T14:09:13.906808Z","steps":["trace[1704629397] 'range keys from in-memory index tree'  (duration: 187.538324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"339.314013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2025-10-13T14:09:13.907424Z","caller":"traceutil/trace.go:172","msg":"trace[692800306] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"346.864291ms","start":"2025-10-13T14:09:13.560497Z","end":"2025-10-13T14:09:13.907361Z","steps":["trace[692800306] 'process raft request'  (duration: 281.825137ms)","trace[692800306] 'compare'  (duration: 64.828079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:13.907508Z","caller":"traceutil/trace.go:172","msg":"trace[107743050] range","detail":"{range_begin:/registry/ipaddresses/10.101.151.157; range_end:; response_count:1; response_revision:2982; }","duration":"339.484538ms","start":"2025-10-13T14:09:13.567635Z","end":"2025-10-13T14:09:13.907120Z","steps":["trace[107743050] 'agreement among raft nodes before linearized reading'  (duration: 274.852745ms)","trace[107743050] 'range keys from in-memory index tree'  (duration: 64.106294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.907801Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.567617Z","time spent":"339.918526ms","remote":"127.0.0.1:33944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":627,"request content":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T14:09:13.908101Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560488Z","time spent":"346.985335ms","remote":"127.0.0.1:33882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":61,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" mod_revision:2971 > success:<request_delete_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > > failure:<request_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > >"}
	{"level":"info","ts":"2025-10-13T14:09:13.908220Z","caller":"traceutil/trace.go:172","msg":"trace[2073246272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"347.573522ms","start":"2025-10-13T14:09:13.560640Z","end":"2025-10-13T14:09:13.908213Z","steps":["trace[2073246272] 'process raft request'  (duration: 346.576205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.908282Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560629Z","time spent":"347.615581ms","remote":"127.0.0.1:33684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":59,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:2972 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"warn","ts":"2025-10-13T14:09:13.910053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.064409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.910727Z","caller":"traceutil/trace.go:172","msg":"trace[1060924441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2983; }","duration":"217.741397ms","start":"2025-10-13T14:09:13.692976Z","end":"2025-10-13T14:09:13.910718Z","steps":["trace[1060924441] 'agreement among raft nodes before linearized reading'  (duration: 216.722483ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:10:52.476707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2025-10-13T14:10:52.510907Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2368,"took":"32.98551ms","hash":1037835104,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5537792,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-13T14:10:52.510982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1037835104,"revision":2368,"compact-revision":1907}
	
	
	==> kernel <==
	 14:15:17 up 19 min,  0 users,  load average: 0.39, 0.73, 0.70
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	I1013 14:08:25.024102       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.588117       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.763275       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.806287       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.836075       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.910579       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.938831       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W1013 14:08:26.095661       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1013 14:08:26.314291       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.607638       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1013 14:08:26.637481       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.689652       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1013 14:08:26.941141       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1013 14:08:26.941574       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1013 14:08:26.961310       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1013 14:08:27.080209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:27.138121       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1013 14:08:28.080963       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1013 14:08:28.086493       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1013 14:08:45.022422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40132: use of closed network connection
	E1013 14:08:45.229592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40168: use of closed network connection
	I1013 14:08:54.741628       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.41.148"}
	I1013 14:09:48.903970       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1013 14:11:31.775897       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 14:11:31.990340       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.79.22"}
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	E1013 14:14:31.578343       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:31.579874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:31.666963       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:31.668799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:33.574779       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:33.576460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:41.981944       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:41.983086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:46.447147       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:46.448784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:46.538730       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:14:48.845352       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:48.847580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:56.078227       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:56.079905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:14:56.950762       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:14:56.952652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:15:01.537835       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1013 14:15:12.494513       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:15:12.495920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:15:13.491481       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:15:13.492844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:15:13.867446       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:15:13.869141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:15:16.538182       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.569998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:54.570036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:54.570113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.569998    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.570730    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:14:38 addons-214022 kubelet[1511]: E1013 14:14:38.570924    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:14:41 addons-214022 kubelet[1511]: I1013 14:14:41.375527    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qdl2b" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:45 addons-214022 kubelet[1511]: I1013 14:14:45.376960    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:45 addons-214022 kubelet[1511]: E1013 14:14:45.377549    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:14:46 addons-214022 kubelet[1511]: E1013 14:14:46.380294    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/registry-creds-764b6fb674-rsjlm" podUID="3c1885cc-c9ac-48aa-bfe5-5873197f65f5"
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.524580    1511 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hwd\" (UniqueName: \"kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd\") pod \"3c1885cc-c9ac-48aa-bfe5-5873197f65f5\" (UID: \"3c1885cc-c9ac-48aa-bfe5-5873197f65f5\") "
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.530766    1511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd" (OuterVolumeSpecName: "kube-api-access-h4hwd") pod "3c1885cc-c9ac-48aa-bfe5-5873197f65f5" (UID: "3c1885cc-c9ac-48aa-bfe5-5873197f65f5"). InnerVolumeSpecName "kube-api-access-h4hwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 13 14:14:46 addons-214022 kubelet[1511]: I1013 14:14:46.626291    1511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4hwd\" (UniqueName: \"kubernetes.io/projected/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-kube-api-access-h4hwd\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:47 addons-214022 kubelet[1511]: I1013 14:14:47.378182    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:47 addons-214022 kubelet[1511]: E1013 14:14:47.379044    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:14:47 addons-214022 kubelet[1511]: I1013 14:14:47.533804    1511 reconciler_common.go:299] "Volume detached for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/3c1885cc-c9ac-48aa-bfe5-5873197f65f5-gcr-creds\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:14:49 addons-214022 kubelet[1511]: I1013 14:14:49.378570    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1885cc-c9ac-48aa-bfe5-5873197f65f5" path="/var/lib/kubelet/pods/3c1885cc-c9ac-48aa-bfe5-5873197f65f5/volumes"
	Oct 13 14:14:52 addons-214022 kubelet[1511]: E1013 14:14:52.377591    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:14:56 addons-214022 kubelet[1511]: E1013 14:14:56.584168    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:14:56 addons-214022 kubelet[1511]: E1013 14:14:56.586044    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:14:56 addons-214022 kubelet[1511]: E1013 14:14:56.586460    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(bda8657d-2e14-4dc2-9e93-ecb85c37f5ed): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:14:56 addons-214022 kubelet[1511]: E1013 14:14:56.586595    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:14:59 addons-214022 kubelet[1511]: I1013 14:14:59.375563    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:14:59 addons-214022 kubelet[1511]: E1013 14:14:59.377018    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:15:06 addons-214022 kubelet[1511]: E1013 14:15:06.378105    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:15:09 addons-214022 kubelet[1511]: E1013 14:15:09.378242    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:15:10 addons-214022 kubelet[1511]: I1013 14:15:10.376125    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:15:10 addons-214022 kubelet[1511]: E1013 14:15:10.377199    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:14:52.255589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:54.259714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:54.270715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:56.275185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:56.285854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:58.290880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:14:58.296185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:00.302030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:00.313950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:02.318839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:02.324992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:04.329060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:04.338730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:06.343121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:06.349881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:08.352867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:08.357978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:10.362539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:10.369483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:12.373661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:12.382597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:14.386831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:14.392859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:16.399635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:15:16.407947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1 (97.347977ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:11:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhpgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qhpgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m47s                 default-scheduler  Successfully assigned default/nginx to addons-214022
	  Normal   Pulling    40s (x5 over 3m46s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     40s (x5 over 3m46s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     40s (x5 over 3m46s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x13 over 3m45s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x13 over 3m45s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cpq8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-214022
	  Normal   Pulling    3m5s (x5 over 6m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m5s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m5s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    61s (x21 over 6m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     61s (x21 over 6m2s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wxvk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8wxvk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.026179458s)
--- FAIL: TestAddons/parallel/CSI (372.14s)

TestAddons/parallel/LocalPath (345.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-214022 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-214022 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default (identical poll repeated 59 more times)
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-214022 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.404µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.406046126s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────┬─────────┬──────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────┼─────────┼──────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651 │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703 │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651 │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703 │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949 │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	│ addons  │ addons-214022 addons disable volcano --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable gcp-auth --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ enable headlamp -p addons-214022 --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable metrics-server --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable headlamp --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable yakd --alsologtostderr -v=1 │ addons-214022 │ jenkins │ v1.37.0 │ 13 Oct 25 14:11 UTC │ 13 Oct 25 14:11 UTC │
	└─────────┴──────┴─────────┴──────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.335102 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.834820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.337927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.836162 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.334652 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.335329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.836940 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.335265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.835299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.334493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.835958 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.336901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.836037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.334865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.335331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.835376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.334760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.835451 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.335213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.835487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.334559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.835709 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.336510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.835078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.334427 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.835800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.836213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.835870 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.336474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.835258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.335636 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.335125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.835336 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.334300 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.834511 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.334734 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.336483 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.835357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.334098 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.834039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.336018 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.836261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.334061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.834919 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.334649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.835154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.336354 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.834937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.335025 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.835808 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.835220 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.335287 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.336327 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.836514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.835391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.335754 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.834954 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.337125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.836950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.335741 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.835238 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.334514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.836800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.335199 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.834223 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.334374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.834313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.335017 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.836739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.836138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.837760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.335601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.834423 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.335277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.334190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.835779 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.335566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.834803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.335076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.834352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.337145 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.836318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.335627 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.834879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.335150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.834450 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.335022 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.836226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.335160 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.836271 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.335103 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.335568 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.836839 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.335318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.836164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.334826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.835127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.336865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.836135 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.336673 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.835150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.334589 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.834578 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.335334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.835296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.335639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.836101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.334964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.835761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.335325 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.836391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.335041 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.836020 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.335603 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.834446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.336822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.835728 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.834134 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.335154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.836561 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.336212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.835791 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.335558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.835276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.335841 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.836019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.334744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.834701 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.335446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.835594 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.337105 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.834479 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.335535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.835194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.335256 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.834824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.336078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.835454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.335291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.835631 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.336375 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.835517 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.335533 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.835668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.334675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.836765 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.335738 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.835614 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.334992 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.834761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.835039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.335024 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.835393 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.335510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.834835 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.335247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.835193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.337646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.334671 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.835950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.335072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.835262 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.336068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.838250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.336473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.834196 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.835516 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.336890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.336117 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.835027 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.336076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.835382 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.334500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.835763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.335780 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.834829 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.335922 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.835807 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.335268 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.835042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.334861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.835742 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.835542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.336308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.834819 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.334458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.834430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.334302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.834698 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.335242 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.837355 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.334901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.835822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.335481 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.835077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.335379 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.835858 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.335030 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.334406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.835970 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.336845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:26.835639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.334566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:27.834610 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.335758 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:28.834181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.335230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:29.836521 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.335115 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:30.834296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.334011 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:31.835572 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.334655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:32.837467 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.334547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:33.835937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.335478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:34.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.334801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:35.834872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:36.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:37.834089 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.334927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:38.835775 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.334557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:39.834110 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.336120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:40.835608 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.338054 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:41.835852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.335214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:42.835500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.334478 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:43.835206 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.335016 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:44.835509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.334080 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:45.835482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.336619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:46.835408 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.334489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:47.834778 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.334764 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:48.836472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:49.834969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.335466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:50.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:51.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.336616 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:52.835557 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.335389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:53.837280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.335407 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:54.835989 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.334416 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:55.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.336883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:56.835437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.334771 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:57.836376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.334601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:58.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.334699 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:59.834770 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.334874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:00.835696 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.335335 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:01.836061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.334551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:02.836309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.335167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:03.835702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.334763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:04.835576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.335505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:05.835798 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.335506 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:06.836329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.335321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:07.834801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.334908 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:08.835943 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.335962 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:09.836396 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.335654 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:10.835633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.335803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:11.835579 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.334633 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:12.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.335151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:13.835600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:14.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.336050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:15.835564 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.335649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:16.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.335190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:17.834455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.334544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:18.835370 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.335502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:19.834672 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.334781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:20.834666 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:21.835748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.335284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:22.835158 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.337417 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:23.835644 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.335243 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:24.835634 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.335832 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:25.836076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.336097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:26.835499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.334133 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:27.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.334598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:28.835174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.335615 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:29.835346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.334875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:30.835362 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.335392 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:31.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:32.835890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.336384 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:33.835565 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.334702 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:34.836069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.335345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:35.835340 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.338240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:36.836180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.336383 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:37.835503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:38.836328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.333988 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:39.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.335216 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:40.836465 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.334886 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:41.836108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.336180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:42.836086 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.335099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:43.836475 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.334621 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:44.834926 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.334707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:45.835907 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.336386 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:46.834665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.334390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:47.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.333981 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:48.836628 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.335276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:49.835518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.334588 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:50.835824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.338905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:51.836639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.335704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:52.835552 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.334682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:53.835883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:54.335635 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:54.835001 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:55.334830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:55.834874 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:56.336549 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:56.838494 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:57.335810 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:57.834944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:58.335374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:58.834675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:59.335833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:00:59.836291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:00.334291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:00.835818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:01.335302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:01.836497 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:02.334553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:02.834695 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:03.335580 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:03.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:04.336475 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:04.834974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:05.335889 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:05.835181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:06.336380 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:06.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:07.336442 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:07.834531 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:08.335397 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:08.834456 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:09.337231 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:09.834677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:10.335412 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:10.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:11.336539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:11.835527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:12.335028 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:12.835688 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:13.335233 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:13.835239 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:14.335877 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:14.836559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:15.335297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:15.837219 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:16.336121 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:16.834649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:17.336482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:17.834805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:18.335108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:18.834964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:19.335574 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:19.834926 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:20.335903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:20.835661 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:21.337729 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:21.835944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:22.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:22.834840 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:23.336497 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:23.835735 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:24.336414 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:24.835122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:25.335039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:25.835080 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:26.336069 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:26.835239 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:27.335177 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:27.835351 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:28.335126 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:28.835180 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:29.335028 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:29.835406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:30.334198 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:30.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:31.336224 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:31.836107 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:32.336440 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:32.835883 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:33.336101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:33.835094 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:34.334705 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:34.836586 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:35.335865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:35.834824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:36.336836 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:36.836236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:37.334530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:37.836132 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:38.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:38.834953 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:39.336330 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:39.834343 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:40.334470 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:40.835865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:41.336394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:41.834746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:42.336193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:42.835282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:43.334495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:43.835755 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:44.335371 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:44.835573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:45.335010 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:45.835070 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:46.337081 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:46.836917 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:47.336075 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:47.836303 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:48.335543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:48.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:49.336304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:49.835123 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:50.334303 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:50.836073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.337121 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.335474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.835147 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.834679 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.334975 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.835505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.335547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.834320 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.337072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.835338 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.334677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.835088 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.334605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.834688 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.835956 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.336504 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.836995 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.335212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.834385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.335476 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.835502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.335371 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.836012 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.335744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.834380 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.335240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.835337 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.335893 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.834524 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.334081 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.835413 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.334814 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.834505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.335015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.835005 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.336275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.835387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.335267 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.835234 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.335689 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.336968 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.835611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.835927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.337411 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.834441 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.335062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.835993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4b9c2b1e8388b       56cc512116c8f       5 minutes ago       Running             busybox                                  0                   c2017033bd492       busybox
	d6a3c830fdead       1bec18b3728e7       16 minutes ago      Running             controller                               0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	dc9eac6946abb       738351fd438f0       16 minutes ago      Running             csi-snapshotter                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	caf59fa52cf6c       931dbfd16f87c       16 minutes ago      Running             csi-provisioner                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	dcdb3cedeedc5       e899260153aed       16 minutes ago      Running             liveness-probe                           0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	20320037960be       e255e073c508c       16 minutes ago      Running             hostpath                                 0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	251c9387cb3f1       88ef14a257f42       16 minutes ago      Running             node-driver-registrar                    0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	4bf53d30ff2bf       19a639eda60f0       16 minutes ago      Running             csi-resizer                              0                   38173b2da332e       csi-hostpath-resizer-0
	da92c998f6d36       a1ed5895ba635       16 minutes ago      Running             csi-external-health-monitor-controller   0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	fdb740423cae7       aa61ee9c70bc4       16 minutes ago      Running             volume-snapshot-controller               0                   d87f7092f76cb       snapshot-controller-7d9fbc56b8-fcqg8
	d9300160a8179       59cbb42146a37       16 minutes ago      Running             csi-attacher                             0                   1571308a93146       csi-hostpath-attacher-0
	59dcea13b91a7       aa61ee9c70bc4       16 minutes ago      Running             volume-snapshot-controller               0                   fc7a88bf2bbfa       snapshot-controller-7d9fbc56b8-pnqwn
	ac9ca79606b04       8c217da6734db       16 minutes ago      Exited              patch                                    0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       16 minutes ago      Exited              create                                   0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       17 minutes ago      Running             gadget                                   0                   cd47cb2e122c6       gadget-lrthv
	427e1841635f7       e16d1e3a10667       17 minutes ago      Running             local-path-provisioner                   0                   b07165834017e       local-path-provisioner-648f6765c9-txczb
	55e4c7d9441ba       b1c9f9ef5f0c2       17 minutes ago      Running             registry-proxy                           0                   dbfd8a2965678       registry-proxy-qdl2b
	11373ec0dad23       b6ab53fbfedaa       17 minutes ago      Running             minikube-ingress-dns                     0                   25d666aa48ee6       kube-ingress-dns-minikube
	61d2e3b41e535       6e38f40d628db       17 minutes ago      Running             storage-provisioner                      0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       17 minutes ago      Running             amd-gpu-device-plugin                    0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       17 minutes ago      Running             coredns                                  0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       17 minutes ago      Running             kube-proxy                               0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       18 minutes ago      Running             kube-controller-manager                  0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       18 minutes ago      Running             etcd                                     0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       18 minutes ago      Running             kube-scheduler                           0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       18 minutes ago      Running             kube-apiserver                           0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:13:09 addons-214022 containerd[816]: time="2025-10-13T14:13:09.454967280Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:13:09 addons-214022 containerd[816]: time="2025-10-13T14:13:09.553038548Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:13:09 addons-214022 containerd[816]: time="2025-10-13T14:13:09.553163582Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.400319743Z" level=info msg="StopPodSandbox for \"8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433\""
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.517555016Z" level=info msg="shim disconnected" id=8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433 namespace=k8s.io
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.517589540Z" level=warning msg="cleaning up after shim disconnected" id=8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433 namespace=k8s.io
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.517599868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.663997067Z" level=info msg="TearDown network for sandbox \"8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433\" successfully"
	Oct 13 14:13:10 addons-214022 containerd[816]: time="2025-10-13T14:13:10.664069498Z" level=info msg="StopPodSandbox for \"8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433\" returns successfully"
	Oct 13 14:13:40 addons-214022 containerd[816]: time="2025-10-13T14:13:40.855460975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7,Uid:e2750ad6-dfb7-4833-ac76-165ede8de999,Namespace:local-path-storage,Attempt:0,}"
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.007192972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.007310891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.007327198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.007961767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.089075808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7,Uid:e2750ad6-dfb7-4833-ac76-165ede8de999,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"f2ab2494666e0f7079440b28453bfbf86d9c601996785ccb762e7664ae7509d3\""
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.092785371Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.096003280Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.193204332Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.290528603Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:13:41 addons-214022 containerd[816]: time="2025-10-13T14:13:41.290580508Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	Oct 13 14:13:53 addons-214022 containerd[816]: time="2025-10-13T14:13:53.380721024Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 13 14:13:53 addons-214022 containerd[816]: time="2025-10-13T14:13:53.384858991Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:13:53 addons-214022 containerd[816]: time="2025-10-13T14:13:53.454189666Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:13:53 addons-214022 containerd[816]: time="2025-10-13T14:13:53.542453392Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:13:53 addons-214022 containerd[816]: time="2025-10-13T14:13:53.542577750Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:48315 - 10752 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078282s
	[INFO] 10.244.0.8:41512 - 46997 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000258451s
	[INFO] 10.244.0.8:41512 - 14759 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000492187s
	[INFO] 10.244.0.8:41512 - 20124 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000199597s
	[INFO] 10.244.0.8:41512 - 64086 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00021521s
	[INFO] 10.244.0.8:41512 - 31070 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000130432s
	[INFO] 10.244.0.8:41512 - 13022 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000190866s
	[INFO] 10.244.0.8:41512 - 29768 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000146051s
	[INFO] 10.244.0.8:41512 - 16294 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000439189s
	[INFO] 10.244.0.8:56911 - 20541 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000195868s
	[INFO] 10.244.0.8:56911 - 39585 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000558652s
	[INFO] 10.244.0.8:56911 - 18306 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000202765s
	[INFO] 10.244.0.8:56911 - 41479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000533304s
	[INFO] 10.244.0.8:56911 - 61965 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000128008s
	[INFO] 10.244.0.8:56911 - 5221 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000321899s
	[INFO] 10.244.0.8:56911 - 54863 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000091528s
	[INFO] 10.244.0.8:56911 - 34496 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000283231s
	[INFO] 10.244.0.8:59476 - 61940 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000156467s
	[INFO] 10.244.0.8:59476 - 20588 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000174574s
	[INFO] 10.244.0.8:59476 - 64555 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084586s
	[INFO] 10.244.0.8:59476 - 64921 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008812s
	[INFO] 10.244.0.8:59476 - 38746 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000159128s
	[INFO] 10.244.0.8:59476 - 14992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000126009s
	[INFO] 10.244.0.8:59476 - 52667 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102702s
	[INFO] 10.244.0.8:59476 - 20771 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000494773s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214022"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  gadget                      gadget-lrthv                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         17m
	  kube-system                 amd-gpu-device-plugin-k6tpl                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-h4thg                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-4jxqs                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-214022                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-addons-214022                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-214022                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-m9kg9                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-214022                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-66898fdd98-qpt8q                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-creds-764b6fb674-rsjlm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-proxy-qdl2b                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-fcqg8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-pnqwn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-txczb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[  +0.000102] kauditd_printk_skb: 285 callbacks suppressed
	[  +1.171734] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.188548] kauditd_printk_skb: 340 callbacks suppressed
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	[Oct13 14:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 65 callbacks suppressed
	[ +12.058507] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000136] kauditd_printk_skb: 22 callbacks suppressed
	[Oct13 14:09] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.303382] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.474208] kauditd_printk_skb: 49 callbacks suppressed
	[Oct13 14:10] kauditd_printk_skb: 90 callbacks suppressed
	[Oct13 14:11] kauditd_printk_skb: 9 callbacks suppressed
	[ +15.690633] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.656333] kauditd_printk_skb: 21 callbacks suppressed
	[Oct13 14:13] kauditd_printk_skb: 44 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"info","ts":"2025-10-13T13:57:03.066329Z","caller":"traceutil/trace.go:172","msg":"trace[1337303940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"235.769671ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066321Z","steps":["trace[1337303940] 'range keys from in-memory index tree'  (duration: 235.56325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.221636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066824Z","caller":"traceutil/trace.go:172","msg":"trace[1790166720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"236.26612ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066818Z","steps":["trace[1790166720] 'range keys from in-memory index tree'  (duration: 236.097045ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T14:09:13.842270Z","caller":"traceutil/trace.go:172","msg":"trace[1885689359] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3177; }","duration":"274.560471ms","start":"2025-10-13T14:09:13.567649Z","end":"2025-10-13T14:09:13.842209Z","steps":["trace[1885689359] 'read index received'  (duration: 274.551109ms)","trace[1885689359] 'applied index is now lower than readState.Index'  (duration: 8.253µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.906716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.580668ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.906823Z","caller":"traceutil/trace.go:172","msg":"trace[1704629397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2982; }","duration":"187.730839ms","start":"2025-10-13T14:09:13.719077Z","end":"2025-10-13T14:09:13.906808Z","steps":["trace[1704629397] 'range keys from in-memory index tree'  (duration: 187.538324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"339.314013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2025-10-13T14:09:13.907424Z","caller":"traceutil/trace.go:172","msg":"trace[692800306] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"346.864291ms","start":"2025-10-13T14:09:13.560497Z","end":"2025-10-13T14:09:13.907361Z","steps":["trace[692800306] 'process raft request'  (duration: 281.825137ms)","trace[692800306] 'compare'  (duration: 64.828079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:13.907508Z","caller":"traceutil/trace.go:172","msg":"trace[107743050] range","detail":"{range_begin:/registry/ipaddresses/10.101.151.157; range_end:; response_count:1; response_revision:2982; }","duration":"339.484538ms","start":"2025-10-13T14:09:13.567635Z","end":"2025-10-13T14:09:13.907120Z","steps":["trace[107743050] 'agreement among raft nodes before linearized reading'  (duration: 274.852745ms)","trace[107743050] 'range keys from in-memory index tree'  (duration: 64.106294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.907801Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.567617Z","time spent":"339.918526ms","remote":"127.0.0.1:33944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":627,"request content":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T14:09:13.908101Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560488Z","time spent":"346.985335ms","remote":"127.0.0.1:33882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":61,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" mod_revision:2971 > success:<request_delete_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > > failure:<request_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > >"}
	{"level":"info","ts":"2025-10-13T14:09:13.908220Z","caller":"traceutil/trace.go:172","msg":"trace[2073246272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"347.573522ms","start":"2025-10-13T14:09:13.560640Z","end":"2025-10-13T14:09:13.908213Z","steps":["trace[2073246272] 'process raft request'  (duration: 346.576205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.908282Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560629Z","time spent":"347.615581ms","remote":"127.0.0.1:33684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":59,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:2972 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"warn","ts":"2025-10-13T14:09:13.910053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.064409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.910727Z","caller":"traceutil/trace.go:172","msg":"trace[1060924441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2983; }","duration":"217.741397ms","start":"2025-10-13T14:09:13.692976Z","end":"2025-10-13T14:09:13.910718Z","steps":["trace[1060924441] 'agreement among raft nodes before linearized reading'  (duration: 216.722483ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:10:52.476707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2025-10-13T14:10:52.510907Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2368,"took":"32.98551ms","hash":1037835104,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5537792,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-13T14:10:52.510982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1037835104,"revision":2368,"compact-revision":1907}
	
	
	==> kernel <==
	 14:13:55 up 18 min,  0 users,  load average: 0.64, 0.87, 0.75
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	I1013 14:08:25.024102       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.588117       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.763275       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.806287       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.836075       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.910579       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.938831       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W1013 14:08:26.095661       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1013 14:08:26.314291       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.607638       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1013 14:08:26.637481       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.689652       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1013 14:08:26.941141       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1013 14:08:26.941574       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1013 14:08:26.961310       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1013 14:08:27.080209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:27.138121       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1013 14:08:28.080963       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1013 14:08:28.086493       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1013 14:08:45.022422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40132: use of closed network connection
	E1013 14:08:45.229592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40168: use of closed network connection
	I1013 14:08:54.741628       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.41.148"}
	I1013 14:09:48.903970       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1013 14:11:31.775897       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 14:11:31.990340       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.79.22"}
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	E1013 14:13:03.521964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:13.119350       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:13.120829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:14.690109       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:14.691872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:19.960980       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:19.962245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:20.663243       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:20.664832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:30.873609       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:30.874881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:37.665130       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:37.666484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:46.307845       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:46.309219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:48.665286       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:48.666987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:49.693673       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:49.695558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:50.585520       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:50.586838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:53.455709       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:53.457704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:13:54.112800       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:13:54.114363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.569998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:54.570036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:54.570113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:13:27 addons-214022 kubelet[1511]: I1013 14:13:27.376954    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:13:27 addons-214022 kubelet[1511]: E1013 14:13:27.378559    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:13:29 addons-214022 kubelet[1511]: E1013 14:13:29.376038    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:13:31 addons-214022 kubelet[1511]: I1013 14:13:31.377329    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:13:34 addons-214022 kubelet[1511]: E1013 14:13:34.378260    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:13:35 addons-214022 kubelet[1511]: I1013 14:13:35.376061    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qdl2b" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:13:40 addons-214022 kubelet[1511]: I1013 14:13:40.708932    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e2750ad6-dfb7-4833-ac76-165ede8de999-script\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"e2750ad6-dfb7-4833-ac76-165ede8de999\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:13:40 addons-214022 kubelet[1511]: I1013 14:13:40.709009    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8b2w\" (UniqueName: \"kubernetes.io/projected/e2750ad6-dfb7-4833-ac76-165ede8de999-kube-api-access-h8b2w\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"e2750ad6-dfb7-4833-ac76-165ede8de999\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:13:40 addons-214022 kubelet[1511]: I1013 14:13:40.709041    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e2750ad6-dfb7-4833-ac76-165ede8de999-data\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"e2750ad6-dfb7-4833-ac76-165ede8de999\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:13:41 addons-214022 kubelet[1511]: E1013 14:13:41.290918    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:13:41 addons-214022 kubelet[1511]: E1013 14:13:41.290988    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:13:41 addons-214022 kubelet[1511]: E1013 14:13:41.291064    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7_local-path-storage(e2750ad6-dfb7-4833-ac76-165ede8de999): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:13:41 addons-214022 kubelet[1511]: E1013 14:13:41.291121    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" podUID="e2750ad6-dfb7-4833-ac76-165ede8de999"
	Oct 13 14:13:41 addons-214022 kubelet[1511]: E1013 14:13:41.376036    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:13:42 addons-214022 kubelet[1511]: E1013 14:13:42.141602    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" podUID="e2750ad6-dfb7-4833-ac76-165ede8de999"
	Oct 13 14:13:42 addons-214022 kubelet[1511]: I1013 14:13:42.375462    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:13:42 addons-214022 kubelet[1511]: E1013 14:13:42.376561    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:13:47 addons-214022 kubelet[1511]: E1013 14:13:47.377487    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e84718ad-4d7b-4ca8-aeb7-59e4d2740bd4"
	Oct 13 14:13:52 addons-214022 kubelet[1511]: E1013 14:13:52.375743    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: I1013 14:13:53.376050    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: E1013 14:13:53.378931    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: E1013 14:13:53.542733    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: E1013 14:13:53.542795    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: E1013 14:13:53.542914    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7_local-path-storage(e2750ad6-dfb7-4833-ac76-165ede8de999): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:13:53 addons-214022 kubelet[1511]: E1013 14:13:53.543021    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" podUID="e2750ad6-dfb7-4833-ac76-165ede8de999"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:13:29.756522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:31.761186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:31.770214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:33.775071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:33.781780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:35.785924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:35.792216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:37.797340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:37.802694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:39.806617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:39.815536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:41.821662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:41.831252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:43.834429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:43.840539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:45.843720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:45.849893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:47.854670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:47.861329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:49.865859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:49.872764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:51.877154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:51.883677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:53.886899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:13:53.892786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7: exit status 1 (97.474971ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:11:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhpgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qhpgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m25s                default-scheduler  Successfully assigned default/nginx to addons-214022
	  Normal   Pulling    47s (x4 over 2m24s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     47s (x4 over 2m24s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     47s (x4 over 2m24s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x8 over 2m23s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x8 over 2m23s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cpq8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m41s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-214022
	  Normal   Pulling    103s (x5 over 4m41s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     103s (x5 over 4m40s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     103s (x5 over 4m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s (x15 over 4m40s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x19 over 4m40s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wxvk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8wxvk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rsjlm" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.994482576s)
--- FAIL: TestAddons/parallel/LocalPath (345.74s)

TestAddons/parallel/Yakd (128.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bl6xb" [9b696edf-33b0-4b8c-a0c6-b17b9bb067fa] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-10-13 14:11:22.96681425 +0000 UTC m=+973.907372624
addons_test.go:1047: (dbg) Run:  kubectl --context addons-214022 describe po yakd-dashboard-5ff678cb9-bl6xb -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-214022 describe po yakd-dashboard-5ff678cb9-bl6xb -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-bl6xb
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-214022/192.168.39.214
Start Time:       Mon, 13 Oct 2025 13:56:11 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-bl6xb (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4gxdn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4gxdn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  15m                   default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb to addons-214022
  Normal   Pulling    11m (x5 over 15m)     kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed     11m (x5 over 14m)     kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": failed to pull and unpack image "docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     11m (x5 over 14m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m33s (x42 over 14m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    8s (x62 over 14m)     kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-214022 logs yakd-dashboard-5ff678cb9-bl6xb -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-214022 logs yakd-dashboard-5ff678cb9-bl6xb -n yakd-dashboard: exit status 1 (75.433138ms)

** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-bl6xb" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:1047: kubectl --context addons-214022 logs yakd-dashboard-5ff678cb9-bl6xb -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214022 -n addons-214022
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 logs -n 25: (1.476085882s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                          │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-459703                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-039949                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-039949 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ addons  │ enable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-214022                                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ start   │ -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 14:02 UTC │
	│ addons  │ addons-214022 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ enable headlamp -p addons-214022 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:08 UTC │
	│ addons  │ addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:08 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	│ addons  │ addons-214022 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-214022        │ jenkins │ v1.37.0 │ 13 Oct 25 14:09 UTC │ 13 Oct 25 14:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:20.628679 1815551 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:20.628995 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629006 1815551 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:20.629013 1815551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:20.629212 1815551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:20.629832 1815551 out.go:368] Setting JSON to false
	I1013 13:55:20.630822 1815551 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20269,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:20.630927 1815551 start.go:141] virtualization: kvm guest
	I1013 13:55:20.633155 1815551 out.go:179] * [addons-214022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:20.634757 1815551 notify.go:220] Checking for updates...
	I1013 13:55:20.634845 1815551 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 13:55:20.636374 1815551 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:20.637880 1815551 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:20.639342 1815551 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:20.640732 1815551 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 13:55:20.642003 1815551 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 13:55:20.643600 1815551 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:20.674859 1815551 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 13:55:20.676415 1815551 start.go:305] selected driver: kvm2
	I1013 13:55:20.676432 1815551 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:20.676444 1815551 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 13:55:20.677121 1815551 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.677210 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.691866 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.691903 1815551 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:20.705734 1815551 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:20.705799 1815551 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:20.706090 1815551 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:55:20.706122 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:20.706178 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:20.706190 1815551 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:20.706245 1815551 start.go:349] cluster config:
	{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:20.706362 1815551 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:20.708302 1815551 out.go:179] * Starting "addons-214022" primary control-plane node in "addons-214022" cluster
	I1013 13:55:20.709605 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:20.709652 1815551 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:20.709667 1815551 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:20.709799 1815551 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 13:55:20.709812 1815551 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 13:55:20.710191 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:20.710220 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json: {Name:mkc10ba1ef1459bd83ba3e9e0ba7c33fe1be6a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:20.710388 1815551 start.go:360] acquireMachinesLock for addons-214022: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 13:55:20.710457 1815551 start.go:364] duration metric: took 51.101µs to acquireMachinesLock for "addons-214022"
	I1013 13:55:20.710480 1815551 start.go:93] Provisioning new machine with config: &{Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:55:20.710555 1815551 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 13:55:20.713031 1815551 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 13:55:20.713207 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:55:20.713262 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:55:20.727020 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1013 13:55:20.727515 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:55:20.728150 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:55:20.728183 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:55:20.728607 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:55:20.728846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:20.729028 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:20.729259 1815551 start.go:159] libmachine.API.Create for "addons-214022" (driver="kvm2")
	I1013 13:55:20.729295 1815551 client.go:168] LocalClient.Create starting
	I1013 13:55:20.729337 1815551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 13:55:20.759138 1815551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 13:55:21.004098 1815551 main.go:141] libmachine: Running pre-create checks...
	I1013 13:55:21.004128 1815551 main.go:141] libmachine: (addons-214022) Calling .PreCreateCheck
	I1013 13:55:21.004821 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:21.005397 1815551 main.go:141] libmachine: Creating machine...
	I1013 13:55:21.005413 1815551 main.go:141] libmachine: (addons-214022) Calling .Create
	I1013 13:55:21.005675 1815551 main.go:141] libmachine: (addons-214022) creating domain...
	I1013 13:55:21.005726 1815551 main.go:141] libmachine: (addons-214022) creating network...
	I1013 13:55:21.007263 1815551 main.go:141] libmachine: (addons-214022) DBG | found existing default network
	I1013 13:55:21.007531 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.007563 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>default</name>
	I1013 13:55:21.007591 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 13:55:21.007612 1815551 main.go:141] libmachine: (addons-214022) DBG |   <forward mode='nat'>
	I1013 13:55:21.007625 1815551 main.go:141] libmachine: (addons-214022) DBG |     <nat>
	I1013 13:55:21.007636 1815551 main.go:141] libmachine: (addons-214022) DBG |       <port start='1024' end='65535'/>
	I1013 13:55:21.007652 1815551 main.go:141] libmachine: (addons-214022) DBG |     </nat>
	I1013 13:55:21.007667 1815551 main.go:141] libmachine: (addons-214022) DBG |   </forward>
	I1013 13:55:21.007675 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 13:55:21.007684 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 13:55:21.007690 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 13:55:21.007709 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.007733 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 13:55:21.007742 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.007750 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.007756 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.007766 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008295 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.008109 1815579 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045c0}
	I1013 13:55:21.008354 1815551 main.go:141] libmachine: (addons-214022) DBG | defining private network:
	I1013 13:55:21.008379 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.008393 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.008408 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.008433 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.008451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.008458 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.008463 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.008471 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.008475 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.008480 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.008486 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.014811 1815551 main.go:141] libmachine: (addons-214022) DBG | creating private network mk-addons-214022 192.168.39.0/24...
	I1013 13:55:21.089953 1815551 main.go:141] libmachine: (addons-214022) DBG | private network mk-addons-214022 192.168.39.0/24 created
	I1013 13:55:21.090269 1815551 main.go:141] libmachine: (addons-214022) DBG | <network>
	I1013 13:55:21.090299 1815551 main.go:141] libmachine: (addons-214022) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.090308 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>mk-addons-214022</name>
	I1013 13:55:21.090321 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>9289d330-dce4-4691-9e5d-0346b93e6814</uuid>
	I1013 13:55:21.090330 1815551 main.go:141] libmachine: (addons-214022) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 13:55:21.090340 1815551 main.go:141] libmachine: (addons-214022) DBG |   <mac address='52:54:00:03:10:f8'/>
	I1013 13:55:21.090351 1815551 main.go:141] libmachine: (addons-214022) DBG |   <dns enable='no'/>
	I1013 13:55:21.090359 1815551 main.go:141] libmachine: (addons-214022) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 13:55:21.090366 1815551 main.go:141] libmachine: (addons-214022) DBG |     <dhcp>
	I1013 13:55:21.090372 1815551 main.go:141] libmachine: (addons-214022) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 13:55:21.090379 1815551 main.go:141] libmachine: (addons-214022) DBG |     </dhcp>
	I1013 13:55:21.090384 1815551 main.go:141] libmachine: (addons-214022) DBG |   </ip>
	I1013 13:55:21.090402 1815551 main.go:141] libmachine: (addons-214022) DBG | </network>
	I1013 13:55:21.090414 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.090424 1815551 main.go:141] libmachine: (addons-214022) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:21.090432 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.090246 1815579 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.090457 1815551 main.go:141] libmachine: (addons-214022) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 13:55:21.389435 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.389286 1815579 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa...
	I1013 13:55:21.573462 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573304 1815579 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk...
	I1013 13:55:21.573488 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing magic tar header
	I1013 13:55:21.573505 1815551 main.go:141] libmachine: (addons-214022) DBG | Writing SSH key tar header
	I1013 13:55:21.573516 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:21.573436 1815579 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 ...
	I1013 13:55:21.573528 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022
	I1013 13:55:21.573596 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022 (perms=drwx------)
	I1013 13:55:21.573620 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 13:55:21.573632 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 13:55:21.573648 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 13:55:21.573659 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 13:55:21.573667 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 13:55:21.573674 1815551 main.go:141] libmachine: (addons-214022) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 13:55:21.573684 1815551 main.go:141] libmachine: (addons-214022) defining domain...
	I1013 13:55:21.573701 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:21.573728 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 13:55:21.573769 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 13:55:21.573794 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home/jenkins
	I1013 13:55:21.573812 1815551 main.go:141] libmachine: (addons-214022) DBG | checking permissions on dir: /home
	I1013 13:55:21.573827 1815551 main.go:141] libmachine: (addons-214022) DBG | skipping /home - not owner
	I1013 13:55:21.574972 1815551 main.go:141] libmachine: (addons-214022) defining domain using XML: 
	I1013 13:55:21.574985 1815551 main.go:141] libmachine: (addons-214022) <domain type='kvm'>
	I1013 13:55:21.574990 1815551 main.go:141] libmachine: (addons-214022)   <name>addons-214022</name>
	I1013 13:55:21.575002 1815551 main.go:141] libmachine: (addons-214022)   <memory unit='MiB'>4096</memory>
	I1013 13:55:21.575009 1815551 main.go:141] libmachine: (addons-214022)   <vcpu>2</vcpu>
	I1013 13:55:21.575015 1815551 main.go:141] libmachine: (addons-214022)   <features>
	I1013 13:55:21.575023 1815551 main.go:141] libmachine: (addons-214022)     <acpi/>
	I1013 13:55:21.575032 1815551 main.go:141] libmachine: (addons-214022)     <apic/>
	I1013 13:55:21.575059 1815551 main.go:141] libmachine: (addons-214022)     <pae/>
	I1013 13:55:21.575077 1815551 main.go:141] libmachine: (addons-214022)   </features>
	I1013 13:55:21.575100 1815551 main.go:141] libmachine: (addons-214022)   <cpu mode='host-passthrough'>
	I1013 13:55:21.575110 1815551 main.go:141] libmachine: (addons-214022)   </cpu>
	I1013 13:55:21.575122 1815551 main.go:141] libmachine: (addons-214022)   <os>
	I1013 13:55:21.575132 1815551 main.go:141] libmachine: (addons-214022)     <type>hvm</type>
	I1013 13:55:21.575141 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='cdrom'/>
	I1013 13:55:21.575151 1815551 main.go:141] libmachine: (addons-214022)     <boot dev='hd'/>
	I1013 13:55:21.575162 1815551 main.go:141] libmachine: (addons-214022)     <bootmenu enable='no'/>
	I1013 13:55:21.575179 1815551 main.go:141] libmachine: (addons-214022)   </os>
	I1013 13:55:21.575186 1815551 main.go:141] libmachine: (addons-214022)   <devices>
	I1013 13:55:21.575192 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='cdrom'>
	I1013 13:55:21.575201 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.575208 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.575216 1815551 main.go:141] libmachine: (addons-214022)       <readonly/>
	I1013 13:55:21.575224 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575234 1815551 main.go:141] libmachine: (addons-214022)     <disk type='file' device='disk'>
	I1013 13:55:21.575251 1815551 main.go:141] libmachine: (addons-214022)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 13:55:21.575272 1815551 main.go:141] libmachine: (addons-214022)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.575286 1815551 main.go:141] libmachine: (addons-214022)       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.575296 1815551 main.go:141] libmachine: (addons-214022)     </disk>
	I1013 13:55:21.575307 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575317 1815551 main.go:141] libmachine: (addons-214022)       <source network='mk-addons-214022'/>
	I1013 13:55:21.575329 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575339 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575356 1815551 main.go:141] libmachine: (addons-214022)     <interface type='network'>
	I1013 13:55:21.575374 1815551 main.go:141] libmachine: (addons-214022)       <source network='default'/>
	I1013 13:55:21.575392 1815551 main.go:141] libmachine: (addons-214022)       <model type='virtio'/>
	I1013 13:55:21.575408 1815551 main.go:141] libmachine: (addons-214022)     </interface>
	I1013 13:55:21.575416 1815551 main.go:141] libmachine: (addons-214022)     <serial type='pty'>
	I1013 13:55:21.575422 1815551 main.go:141] libmachine: (addons-214022)       <target port='0'/>
	I1013 13:55:21.575433 1815551 main.go:141] libmachine: (addons-214022)     </serial>
	I1013 13:55:21.575443 1815551 main.go:141] libmachine: (addons-214022)     <console type='pty'>
	I1013 13:55:21.575453 1815551 main.go:141] libmachine: (addons-214022)       <target type='serial' port='0'/>
	I1013 13:55:21.575463 1815551 main.go:141] libmachine: (addons-214022)     </console>
	I1013 13:55:21.575475 1815551 main.go:141] libmachine: (addons-214022)     <rng model='virtio'>
	I1013 13:55:21.575486 1815551 main.go:141] libmachine: (addons-214022)       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.575495 1815551 main.go:141] libmachine: (addons-214022)     </rng>
	I1013 13:55:21.575507 1815551 main.go:141] libmachine: (addons-214022)   </devices>
	I1013 13:55:21.575519 1815551 main.go:141] libmachine: (addons-214022) </domain>
	I1013 13:55:21.575530 1815551 main.go:141] libmachine: (addons-214022) 
	I1013 13:55:21.580981 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:54:97:7f in network default
	I1013 13:55:21.581682 1815551 main.go:141] libmachine: (addons-214022) starting domain...
	I1013 13:55:21.581698 1815551 main.go:141] libmachine: (addons-214022) ensuring networks are active...
	I1013 13:55:21.581746 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:21.582514 1815551 main.go:141] libmachine: (addons-214022) Ensuring network default is active
	I1013 13:55:21.583076 1815551 main.go:141] libmachine: (addons-214022) Ensuring network mk-addons-214022 is active
	I1013 13:55:21.583880 1815551 main.go:141] libmachine: (addons-214022) getting domain XML...
	I1013 13:55:21.585201 1815551 main.go:141] libmachine: (addons-214022) DBG | starting domain XML:
	I1013 13:55:21.585220 1815551 main.go:141] libmachine: (addons-214022) DBG | <domain type='kvm'>
	I1013 13:55:21.585231 1815551 main.go:141] libmachine: (addons-214022) DBG |   <name>addons-214022</name>
	I1013 13:55:21.585241 1815551 main.go:141] libmachine: (addons-214022) DBG |   <uuid>c368161c-2753-46d2-a9ea-3f8a7f4ac862</uuid>
	I1013 13:55:21.585272 1815551 main.go:141] libmachine: (addons-214022) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 13:55:21.585285 1815551 main.go:141] libmachine: (addons-214022) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 13:55:21.585295 1815551 main.go:141] libmachine: (addons-214022) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 13:55:21.585304 1815551 main.go:141] libmachine: (addons-214022) DBG |   <os>
	I1013 13:55:21.585317 1815551 main.go:141] libmachine: (addons-214022) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 13:55:21.585324 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='cdrom'/>
	I1013 13:55:21.585329 1815551 main.go:141] libmachine: (addons-214022) DBG |     <boot dev='hd'/>
	I1013 13:55:21.585345 1815551 main.go:141] libmachine: (addons-214022) DBG |     <bootmenu enable='no'/>
	I1013 13:55:21.585358 1815551 main.go:141] libmachine: (addons-214022) DBG |   </os>
	I1013 13:55:21.585369 1815551 main.go:141] libmachine: (addons-214022) DBG |   <features>
	I1013 13:55:21.585391 1815551 main.go:141] libmachine: (addons-214022) DBG |     <acpi/>
	I1013 13:55:21.585403 1815551 main.go:141] libmachine: (addons-214022) DBG |     <apic/>
	I1013 13:55:21.585411 1815551 main.go:141] libmachine: (addons-214022) DBG |     <pae/>
	I1013 13:55:21.585428 1815551 main.go:141] libmachine: (addons-214022) DBG |   </features>
	I1013 13:55:21.585439 1815551 main.go:141] libmachine: (addons-214022) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 13:55:21.585443 1815551 main.go:141] libmachine: (addons-214022) DBG |   <clock offset='utc'/>
	I1013 13:55:21.585451 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 13:55:21.585456 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_reboot>restart</on_reboot>
	I1013 13:55:21.585464 1815551 main.go:141] libmachine: (addons-214022) DBG |   <on_crash>destroy</on_crash>
	I1013 13:55:21.585467 1815551 main.go:141] libmachine: (addons-214022) DBG |   <devices>
	I1013 13:55:21.585476 1815551 main.go:141] libmachine: (addons-214022) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 13:55:21.585483 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='cdrom'>
	I1013 13:55:21.585490 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw'/>
	I1013 13:55:21.585499 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/boot2docker.iso'/>
	I1013 13:55:21.585530 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 13:55:21.585553 1815551 main.go:141] libmachine: (addons-214022) DBG |       <readonly/>
	I1013 13:55:21.585566 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 13:55:21.585582 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585595 1815551 main.go:141] libmachine: (addons-214022) DBG |     <disk type='file' device='disk'>
	I1013 13:55:21.585608 1815551 main.go:141] libmachine: (addons-214022) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 13:55:21.585626 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/addons-214022.rawdisk'/>
	I1013 13:55:21.585638 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target dev='hda' bus='virtio'/>
	I1013 13:55:21.585652 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 13:55:21.585666 1815551 main.go:141] libmachine: (addons-214022) DBG |     </disk>
	I1013 13:55:21.585680 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 13:55:21.585693 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 13:55:21.585706 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585726 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 13:55:21.585741 1815551 main.go:141] libmachine: (addons-214022) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 13:55:21.585760 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 13:55:21.585769 1815551 main.go:141] libmachine: (addons-214022) DBG |     </controller>
	I1013 13:55:21.585773 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585778 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:45:c6:7b'/>
	I1013 13:55:21.585783 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='mk-addons-214022'/>
	I1013 13:55:21.585787 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585793 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 13:55:21.585797 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585801 1815551 main.go:141] libmachine: (addons-214022) DBG |     <interface type='network'>
	I1013 13:55:21.585806 1815551 main.go:141] libmachine: (addons-214022) DBG |       <mac address='52:54:00:54:97:7f'/>
	I1013 13:55:21.585810 1815551 main.go:141] libmachine: (addons-214022) DBG |       <source network='default'/>
	I1013 13:55:21.585815 1815551 main.go:141] libmachine: (addons-214022) DBG |       <model type='virtio'/>
	I1013 13:55:21.585823 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 13:55:21.585828 1815551 main.go:141] libmachine: (addons-214022) DBG |     </interface>
	I1013 13:55:21.585834 1815551 main.go:141] libmachine: (addons-214022) DBG |     <serial type='pty'>
	I1013 13:55:21.585840 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='isa-serial' port='0'>
	I1013 13:55:21.585849 1815551 main.go:141] libmachine: (addons-214022) DBG |         <model name='isa-serial'/>
	I1013 13:55:21.585856 1815551 main.go:141] libmachine: (addons-214022) DBG |       </target>
	I1013 13:55:21.585860 1815551 main.go:141] libmachine: (addons-214022) DBG |     </serial>
	I1013 13:55:21.585867 1815551 main.go:141] libmachine: (addons-214022) DBG |     <console type='pty'>
	I1013 13:55:21.585871 1815551 main.go:141] libmachine: (addons-214022) DBG |       <target type='serial' port='0'/>
	I1013 13:55:21.585878 1815551 main.go:141] libmachine: (addons-214022) DBG |     </console>
	I1013 13:55:21.585882 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='mouse' bus='ps2'/>
	I1013 13:55:21.585888 1815551 main.go:141] libmachine: (addons-214022) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 13:55:21.585895 1815551 main.go:141] libmachine: (addons-214022) DBG |     <audio id='1' type='none'/>
	I1013 13:55:21.585900 1815551 main.go:141] libmachine: (addons-214022) DBG |     <memballoon model='virtio'>
	I1013 13:55:21.585905 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 13:55:21.585912 1815551 main.go:141] libmachine: (addons-214022) DBG |     </memballoon>
	I1013 13:55:21.585920 1815551 main.go:141] libmachine: (addons-214022) DBG |     <rng model='virtio'>
	I1013 13:55:21.585937 1815551 main.go:141] libmachine: (addons-214022) DBG |       <backend model='random'>/dev/random</backend>
	I1013 13:55:21.585942 1815551 main.go:141] libmachine: (addons-214022) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 13:55:21.585947 1815551 main.go:141] libmachine: (addons-214022) DBG |     </rng>
	I1013 13:55:21.585950 1815551 main.go:141] libmachine: (addons-214022) DBG |   </devices>
	I1013 13:55:21.585955 1815551 main.go:141] libmachine: (addons-214022) DBG | </domain>
	I1013 13:55:21.585958 1815551 main.go:141] libmachine: (addons-214022) DBG | 
	I1013 13:55:21.998506 1815551 main.go:141] libmachine: (addons-214022) waiting for domain to start...
	I1013 13:55:21.999992 1815551 main.go:141] libmachine: (addons-214022) domain is now running
	I1013 13:55:22.000011 1815551 main.go:141] libmachine: (addons-214022) waiting for IP...
	I1013 13:55:22.000803 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.001255 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.001289 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.001544 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.001627 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.001556 1815579 retry.go:31] will retry after 233.588452ms: waiting for domain to come up
	I1013 13:55:22.236968 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.237478 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.237508 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.237876 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.237928 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.237848 1815579 retry.go:31] will retry after 300.8157ms: waiting for domain to come up
	I1013 13:55:22.540639 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.541271 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.541302 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.541621 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.541683 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.541605 1815579 retry.go:31] will retry after 377.651668ms: waiting for domain to come up
	I1013 13:55:22.921184 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:22.921783 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:22.921814 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:22.922148 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:22.922237 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:22.922151 1815579 retry.go:31] will retry after 510.251488ms: waiting for domain to come up
	I1013 13:55:23.433846 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:23.434356 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:23.434384 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:23.434622 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:23.434651 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:23.434592 1815579 retry.go:31] will retry after 738.765721ms: waiting for domain to come up
	I1013 13:55:24.174730 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:24.175220 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:24.175261 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:24.175609 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:24.175645 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:24.175615 1815579 retry.go:31] will retry after 941.377797ms: waiting for domain to come up
	I1013 13:55:25.118416 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.119134 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.119161 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.119505 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.119531 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.119464 1815579 retry.go:31] will retry after 715.698221ms: waiting for domain to come up
	I1013 13:55:25.837169 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:25.837602 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:25.837632 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:25.837919 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:25.837956 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:25.837912 1815579 retry.go:31] will retry after 1.477632519s: waiting for domain to come up
	I1013 13:55:27.317869 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:27.318416 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:27.318445 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:27.318730 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:27.318828 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:27.318742 1815579 retry.go:31] will retry after 1.752025896s: waiting for domain to come up
	I1013 13:55:29.072255 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:29.072804 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:29.072827 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:29.073152 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:29.073218 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:29.073146 1815579 retry.go:31] will retry after 1.890403935s: waiting for domain to come up
	I1013 13:55:30.965205 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:30.965861 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:30.965889 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:30.966181 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:30.966249 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:30.966169 1815579 retry.go:31] will retry after 2.015469115s: waiting for domain to come up
	I1013 13:55:32.984641 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:32.985205 1815551 main.go:141] libmachine: (addons-214022) DBG | no network interface addresses found for domain addons-214022 (source=lease)
	I1013 13:55:32.985254 1815551 main.go:141] libmachine: (addons-214022) DBG | trying to list again with source=arp
	I1013 13:55:32.985538 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find current IP address of domain addons-214022 in network mk-addons-214022 (interfaces detected: [])
	I1013 13:55:32.985566 1815551 main.go:141] libmachine: (addons-214022) DBG | I1013 13:55:32.985483 1815579 retry.go:31] will retry after 3.162648802s: waiting for domain to come up
	I1013 13:55:36.149428 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150058 1815551 main.go:141] libmachine: (addons-214022) found domain IP: 192.168.39.214
	I1013 13:55:36.150084 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has current primary IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.150093 1815551 main.go:141] libmachine: (addons-214022) reserving static IP address...
	I1013 13:55:36.150509 1815551 main.go:141] libmachine: (addons-214022) DBG | unable to find host DHCP lease matching {name: "addons-214022", mac: "52:54:00:45:c6:7b", ip: "192.168.39.214"} in network mk-addons-214022
	I1013 13:55:36.359631 1815551 main.go:141] libmachine: (addons-214022) DBG | Getting to WaitForSSH function...
	I1013 13:55:36.359656 1815551 main.go:141] libmachine: (addons-214022) reserved static IP address 192.168.39.214 for domain addons-214022
	I1013 13:55:36.359708 1815551 main.go:141] libmachine: (addons-214022) waiting for SSH...
	I1013 13:55:36.362970 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363545 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.363578 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.363975 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH client type: external
	I1013 13:55:36.364005 1815551 main.go:141] libmachine: (addons-214022) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa (-rw-------)
	I1013 13:55:36.364071 1815551 main.go:141] libmachine: (addons-214022) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 13:55:36.364096 1815551 main.go:141] libmachine: (addons-214022) DBG | About to run SSH command:
	I1013 13:55:36.364112 1815551 main.go:141] libmachine: (addons-214022) DBG | exit 0
	I1013 13:55:36.500938 1815551 main.go:141] libmachine: (addons-214022) DBG | SSH cmd err, output: <nil>: 
	I1013 13:55:36.501251 1815551 main.go:141] libmachine: (addons-214022) domain creation complete
	I1013 13:55:36.501689 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:36.502339 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502549 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:36.502694 1815551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 13:55:36.502705 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:55:36.504172 1815551 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 13:55:36.504188 1815551 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 13:55:36.504195 1815551 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 13:55:36.504201 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.507156 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507596 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.507626 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.507811 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.508003 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508123 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.508334 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.508503 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.508771 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.508786 1815551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 13:55:36.609679 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.609706 1815551 main.go:141] libmachine: Detecting the provisioner...
	I1013 13:55:36.609725 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.612870 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613343 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.613380 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.613602 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.613846 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614017 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.614155 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.614343 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.614556 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.614568 1815551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 13:55:36.717397 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 13:55:36.717477 1815551 main.go:141] libmachine: found compatible host: buildroot
	I1013 13:55:36.717487 1815551 main.go:141] libmachine: Provisioning with buildroot...
	I1013 13:55:36.717495 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.717788 1815551 buildroot.go:166] provisioning hostname "addons-214022"
	I1013 13:55:36.717829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.718042 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.721497 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.721988 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.722027 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.722260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.722429 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722542 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.722660 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.722864 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.723104 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.723120 1815551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214022 && echo "addons-214022" | sudo tee /etc/hostname
	I1013 13:55:36.853529 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214022
	
	I1013 13:55:36.853563 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.856617 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857071 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.857100 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.857320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:36.857522 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:36.857852 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:36.858072 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:36.858351 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:36.858378 1815551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214022' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214022/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214022' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 13:55:36.978438 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 13:55:36.978492 1815551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 13:55:36.978561 1815551 buildroot.go:174] setting up certificates
	I1013 13:55:36.978581 1815551 provision.go:84] configureAuth start
	I1013 13:55:36.978601 1815551 main.go:141] libmachine: (addons-214022) Calling .GetMachineName
	I1013 13:55:36.978932 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:36.982111 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982557 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.982587 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.982769 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:36.985722 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986132 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:36.986153 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:36.986337 1815551 provision.go:143] copyHostCerts
	I1013 13:55:36.986421 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 13:55:36.986610 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 13:55:36.986700 1815551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 13:55:36.986789 1815551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.addons-214022 san=[127.0.0.1 192.168.39.214 addons-214022 localhost minikube]
	I1013 13:55:37.044634 1815551 provision.go:177] copyRemoteCerts
	I1013 13:55:37.044706 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 13:55:37.044750 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.047881 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048238 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.048268 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.048531 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.048757 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.048938 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.049093 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.132357 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 13:55:37.163230 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 13:55:37.193519 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 13:55:37.228041 1815551 provision.go:87] duration metric: took 249.44117ms to configureAuth
	I1013 13:55:37.228073 1815551 buildroot.go:189] setting minikube options for container-runtime
	I1013 13:55:37.228284 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:55:37.228308 1815551 main.go:141] libmachine: Checking connection to Docker...
	I1013 13:55:37.228319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetURL
	I1013 13:55:37.229621 1815551 main.go:141] libmachine: (addons-214022) DBG | using libvirt version 8000000
	I1013 13:55:37.231977 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232573 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.232594 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.232944 1815551 main.go:141] libmachine: Docker is up and running!
	I1013 13:55:37.232959 1815551 main.go:141] libmachine: Reticulating splines...
	I1013 13:55:37.232967 1815551 client.go:171] duration metric: took 16.503662992s to LocalClient.Create
	I1013 13:55:37.232989 1815551 start.go:167] duration metric: took 16.503732898s to libmachine.API.Create "addons-214022"
	I1013 13:55:37.232996 1815551 start.go:293] postStartSetup for "addons-214022" (driver="kvm2")
	I1013 13:55:37.233004 1815551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 13:55:37.233019 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.233334 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 13:55:37.233364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.236079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236495 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.236524 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.236672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.237136 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.237319 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.237840 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.320344 1815551 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 13:55:37.325903 1815551 info.go:137] Remote host: Buildroot 2025.02
	I1013 13:55:37.325945 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 13:55:37.326098 1815551 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 13:55:37.326125 1815551 start.go:296] duration metric: took 93.124024ms for postStartSetup
	I1013 13:55:37.326165 1815551 main.go:141] libmachine: (addons-214022) Calling .GetConfigRaw
	I1013 13:55:37.326907 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.329757 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330258 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.330288 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.330612 1815551 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/config.json ...
	I1013 13:55:37.330830 1815551 start.go:128] duration metric: took 16.620261949s to createHost
	I1013 13:55:37.330855 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.334094 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334644 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.334674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.334903 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.335118 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335320 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.335505 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.335738 1815551 main.go:141] libmachine: Using SSH client type: native
	I1013 13:55:37.336080 1815551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1013 13:55:37.336099 1815551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 13:55:37.453534 1815551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760363737.403582342
	
	I1013 13:55:37.453567 1815551 fix.go:216] guest clock: 1760363737.403582342
	I1013 13:55:37.453576 1815551 fix.go:229] Guest: 2025-10-13 13:55:37.403582342 +0000 UTC Remote: 2025-10-13 13:55:37.33084379 +0000 UTC m=+16.741419072 (delta=72.738552ms)
	I1013 13:55:37.453601 1815551 fix.go:200] guest clock delta is within tolerance: 72.738552ms
	I1013 13:55:37.453614 1815551 start.go:83] releasing machines lock for "addons-214022", held for 16.74313679s
	I1013 13:55:37.453644 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.453996 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:37.457079 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457464 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.457495 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.457681 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458199 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458359 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:55:37.458457 1815551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 13:55:37.458521 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.458571 1815551 ssh_runner.go:195] Run: cat /version.json
	I1013 13:55:37.458594 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:55:37.461592 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462001 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462030 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462059 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462230 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.462419 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.462580 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.462613 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:37.462638 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:37.462750 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.462894 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:55:37.463074 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:55:37.463216 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:55:37.463355 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:55:37.568362 1815551 ssh_runner.go:195] Run: systemctl --version
	I1013 13:55:37.574961 1815551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 13:55:37.581570 1815551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 13:55:37.581652 1815551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 13:55:37.601744 1815551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 13:55:37.601771 1815551 start.go:495] detecting cgroup driver to use...
	I1013 13:55:37.601855 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 13:55:37.636399 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 13:55:37.653284 1815551 docker.go:218] disabling cri-docker service (if available) ...
	I1013 13:55:37.653349 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 13:55:37.671035 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 13:55:37.687997 1815551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 13:55:37.835046 1815551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 13:55:38.036660 1815551 docker.go:234] disabling docker service ...
	I1013 13:55:38.036785 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 13:55:38.054634 1815551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 13:55:38.070992 1815551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 13:55:38.226219 1815551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 13:55:38.375133 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 13:55:38.391629 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 13:55:38.415622 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 13:55:38.428382 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 13:55:38.441166 1815551 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 13:55:38.441271 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 13:55:38.454185 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.467219 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 13:55:38.480016 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 13:55:38.493623 1815551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 13:55:38.507533 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 13:55:38.520643 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 13:55:38.533755 1815551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 13:55:38.546971 1815551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 13:55:38.557881 1815551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 13:55:38.557958 1815551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 13:55:38.578224 1815551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 13:55:38.590424 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:38.732726 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:38.770576 1815551 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 13:55:38.770707 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:38.776353 1815551 retry.go:31] will retry after 1.261164496s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 13:55:40.038886 1815551 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 13:55:40.045830 1815551 start.go:563] Will wait 60s for crictl version
	I1013 13:55:40.045914 1815551 ssh_runner.go:195] Run: which crictl
	I1013 13:55:40.050941 1815551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 13:55:40.093318 1815551 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 13:55:40.093432 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.123924 1815551 ssh_runner.go:195] Run: containerd --version
	I1013 13:55:40.255787 1815551 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 13:55:40.331568 1815551 main.go:141] libmachine: (addons-214022) Calling .GetIP
	I1013 13:55:40.334892 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335313 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:55:40.335337 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:55:40.335632 1815551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 13:55:40.341286 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 13:55:40.357723 1815551 kubeadm.go:883] updating cluster {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 13:55:40.357874 1815551 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 13:55:40.357947 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:40.395630 1815551 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 13:55:40.395736 1815551 ssh_runner.go:195] Run: which lz4
	I1013 13:55:40.400778 1815551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 13:55:40.406306 1815551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 13:55:40.406344 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 13:55:41.943253 1815551 containerd.go:563] duration metric: took 1.54249613s to copy over tarball
	I1013 13:55:41.943351 1815551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 13:55:43.492564 1815551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549175583s)
	I1013 13:55:43.492596 1815551 containerd.go:570] duration metric: took 1.549300388s to extract the tarball
	I1013 13:55:43.492604 1815551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 13:55:43.534655 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:43.680421 1815551 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 13:55:43.727538 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.770225 1815551 retry.go:31] will retry after 129.297012ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T13:55:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 13:55:43.900675 1815551 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 13:55:43.942782 1815551 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 13:55:43.942818 1815551 cache_images.go:85] Images are preloaded, skipping loading
	I1013 13:55:43.942831 1815551 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 containerd true true} ...
	I1013 13:55:43.942973 1815551 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214022 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 13:55:43.943036 1815551 ssh_runner.go:195] Run: sudo crictl info
	I1013 13:55:43.983500 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:43.983527 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:43.983547 1815551 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 13:55:43.983572 1815551 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214022 NodeName:addons-214022 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 13:55:43.983683 1815551 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-214022"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 13:55:43.983786 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 13:55:43.997492 1815551 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 13:55:43.997569 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 13:55:44.009940 1815551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 13:55:44.032456 1815551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 13:55:44.055201 1815551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1013 13:55:44.077991 1815551 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1013 13:55:44.082755 1815551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
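The bash one-liner above makes the `/etc/hosts` entry idempotent: strip any existing line ending in a tab plus the hostname, then append the fresh mapping. A pure-function Go sketch of the same transformation (operating on file contents in memory; the real code shells out over SSH instead):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and
// appends a fresh "<ip>\t<host>" mapping, matching the grep/echo
// pipeline in the log above.
func ensureHostsEntry(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping for this host; replace it below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.214", "control-plane.minikube.internal"))
}
```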
	I1013 13:55:44.102001 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:55:44.250454 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:55:44.271759 1815551 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022 for IP: 192.168.39.214
	I1013 13:55:44.271804 1815551 certs.go:195] generating shared ca certs ...
	I1013 13:55:44.271849 1815551 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.272058 1815551 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 13:55:44.515410 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt ...
	I1013 13:55:44.515443 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt: {Name:mk7e93844bf7a5315c584d29c143e2135009c9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515626 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key ...
	I1013 13:55:44.515639 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key: {Name:mk2370dd9470838be70f5ff73870ee78eaf49615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.515736 1815551 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 13:55:44.688770 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt ...
	I1013 13:55:44.688804 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt: {Name:mk17069980c2ce75e576b93cf8d09a188d68e9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.688989 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key ...
	I1013 13:55:44.689002 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key: {Name:mk6b5175fc3e29304600d26ae322daa345a1af96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:44.689075 1815551 certs.go:257] generating profile certs ...
	I1013 13:55:44.689137 1815551 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key
	I1013 13:55:44.689163 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt with IP's: []
	I1013 13:55:45.249037 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt ...
	I1013 13:55:45.249073 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: {Name:mk280462c7f89663f3ca7afb3f0492dd2b0ee285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249251 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key ...
	I1013 13:55:45.249263 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.key: {Name:mk559b21297b9d07a442f449010608571723a06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.249350 1815551 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114
	I1013 13:55:45.249370 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1013 13:55:45.485539 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 ...
	I1013 13:55:45.485568 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114: {Name:mkd1f4b4fe453f9f52532a7d0522a77f6292f9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485740 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 ...
	I1013 13:55:45.485755 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114: {Name:mk7e630cb0d73800acc236df973e9041d684cea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.485833 1815551 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt
	I1013 13:55:45.485922 1815551 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key.8e072114 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key
	I1013 13:55:45.485980 1815551 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key
	I1013 13:55:45.485998 1815551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt with IP's: []
	I1013 13:55:45.781914 1815551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt ...
	I1013 13:55:45.781958 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt: {Name:mk2c046b91ab288417107efe4a8ee37eb796f0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782135 1815551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key ...
	I1013 13:55:45.782151 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key: {Name:mk11ba110c07b71583dc1e7a37e3c7830733bcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:55:45.782356 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 13:55:45.782394 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 13:55:45.782417 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 13:55:45.782439 1815551 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 13:55:45.783086 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 13:55:45.815352 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 13:55:45.846541 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 13:55:45.880232 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 13:55:45.924466 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 13:55:45.962160 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 13:55:45.999510 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 13:55:46.034971 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 13:55:46.068482 1815551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 13:55:46.099803 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 13:55:46.121270 1815551 ssh_runner.go:195] Run: openssl version
	I1013 13:55:46.128266 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 13:55:46.142449 1815551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148226 1815551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.148313 1815551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 13:55:46.155940 1815551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 13:55:46.170023 1815551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 13:55:46.175480 1815551 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 13:55:46.175554 1815551 kubeadm.go:400] StartCluster: {Name:addons-214022 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-214022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:46.175652 1815551 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 13:55:46.175759 1815551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 13:55:46.214377 1815551 cri.go:89] found id: ""
	I1013 13:55:46.214475 1815551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 13:55:46.227534 1815551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 13:55:46.239809 1815551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 13:55:46.253443 1815551 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 13:55:46.253466 1815551 kubeadm.go:157] found existing configuration files:
	
	I1013 13:55:46.253514 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 13:55:46.265630 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 13:55:46.265706 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 13:55:46.278450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 13:55:46.290243 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 13:55:46.290303 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 13:55:46.303207 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.315748 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 13:55:46.315819 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 13:55:46.328450 1815551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 13:55:46.340422 1815551 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 13:55:46.340491 1815551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 13:55:46.353088 1815551 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 13:55:46.409861 1815551 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 13:55:46.409939 1815551 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 13:55:46.510451 1815551 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 13:55:46.510548 1815551 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 13:55:46.510736 1815551 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 13:55:46.519844 1815551 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 13:55:46.532700 1815551 out.go:252]   - Generating certificates and keys ...
	I1013 13:55:46.532819 1815551 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 13:55:46.532896 1815551 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 13:55:46.783435 1815551 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 13:55:47.020350 1815551 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 13:55:47.775782 1815551 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 13:55:48.011804 1815551 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 13:55:48.461103 1815551 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 13:55:48.461301 1815551 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.750774 1815551 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 13:55:48.751132 1815551 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-214022 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1013 13:55:48.831944 1815551 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 13:55:49.085300 1815551 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 13:55:49.215416 1815551 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 13:55:49.215485 1815551 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 13:55:49.341619 1815551 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 13:55:49.552784 1815551 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 13:55:49.595942 1815551 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 13:55:49.670226 1815551 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 13:55:49.887570 1815551 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 13:55:49.888048 1815551 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 13:55:49.890217 1815551 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 13:55:49.891956 1815551 out.go:252]   - Booting up control plane ...
	I1013 13:55:49.892075 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 13:55:49.892175 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 13:55:49.892283 1815551 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 13:55:49.915573 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 13:55:49.915702 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 13:55:49.926506 1815551 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 13:55:49.926635 1815551 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 13:55:49.926699 1815551 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 13:55:50.104649 1815551 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 13:55:50.104896 1815551 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 13:55:51.105517 1815551 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001950535s
	I1013 13:55:51.110678 1815551 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 13:55:51.110781 1815551 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1013 13:55:51.110862 1815551 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 13:55:51.110934 1815551 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 13:55:53.698826 1815551 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.589717518s
	I1013 13:55:54.571486 1815551 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.462849107s
	I1013 13:55:56.609645 1815551 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502421023s
	I1013 13:55:56.625086 1815551 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 13:55:56.642185 1815551 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 13:55:56.660063 1815551 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 13:55:56.660353 1815551 kubeadm.go:318] [mark-control-plane] Marking the node addons-214022 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 13:55:56.677664 1815551 kubeadm.go:318] [bootstrap-token] Using token: yho7iw.8cmp1omdihpr13ia
	I1013 13:55:56.680503 1815551 out.go:252]   - Configuring RBAC rules ...
	I1013 13:55:56.680644 1815551 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 13:55:56.691921 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 13:55:56.701832 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 13:55:56.706581 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 13:55:56.711586 1815551 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 13:55:56.720960 1815551 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 13:55:57.019012 1815551 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 13:55:57.510749 1815551 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 13:55:58.017894 1815551 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 13:55:58.019641 1815551 kubeadm.go:318] 
	I1013 13:55:58.019746 1815551 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 13:55:58.019759 1815551 kubeadm.go:318] 
	I1013 13:55:58.019856 1815551 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 13:55:58.019866 1815551 kubeadm.go:318] 
	I1013 13:55:58.019906 1815551 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 13:55:58.019991 1815551 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 13:55:58.020075 1815551 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 13:55:58.020087 1815551 kubeadm.go:318] 
	I1013 13:55:58.020135 1815551 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 13:55:58.020180 1815551 kubeadm.go:318] 
	I1013 13:55:58.020272 1815551 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 13:55:58.020284 1815551 kubeadm.go:318] 
	I1013 13:55:58.020355 1815551 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 13:55:58.020465 1815551 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 13:55:58.020560 1815551 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 13:55:58.020570 1815551 kubeadm.go:318] 
	I1013 13:55:58.020696 1815551 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 13:55:58.020841 1815551 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 13:55:58.020863 1815551 kubeadm.go:318] 
	I1013 13:55:58.021022 1815551 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021178 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 13:55:58.021220 1815551 kubeadm.go:318] 	--control-plane 
	I1013 13:55:58.021227 1815551 kubeadm.go:318] 
	I1013 13:55:58.021356 1815551 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 13:55:58.021366 1815551 kubeadm.go:318] 
	I1013 13:55:58.021480 1815551 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yho7iw.8cmp1omdihpr13ia \
	I1013 13:55:58.021613 1815551 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 13:55:58.023899 1815551 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 13:55:58.023930 1815551 cni.go:84] Creating CNI manager for ""
	I1013 13:55:58.023940 1815551 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:58.026381 1815551 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 13:55:58.028311 1815551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 13:55:58.043778 1815551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 13:55:58.076261 1815551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 13:55:58.076355 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.076389 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214022 minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-214022 minikube.k8s.io/primary=true
	I1013 13:55:58.125421 1815551 ops.go:34] apiserver oom_adj: -16
	I1013 13:55:58.249972 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:58.750645 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.250461 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:55:59.750623 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.250758 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:00.750903 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.250112 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:01.750238 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.250999 1815551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 13:56:02.377634 1815551 kubeadm.go:1113] duration metric: took 4.301363742s to wait for elevateKubeSystemPrivileges
	I1013 13:56:02.377670 1815551 kubeadm.go:402] duration metric: took 16.202122758s to StartCluster
	I1013 13:56:02.377691 1815551 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.377852 1815551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:56:02.378374 1815551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 13:56:02.378641 1815551 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 13:56:02.378701 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 13:56:02.378727 1815551 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 13:56:02.378856 1815551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214022"
	I1013 13:56:02.378871 1815551 addons.go:69] Setting yakd=true in profile "addons-214022"
	I1013 13:56:02.378888 1815551 addons.go:238] Setting addon yakd=true in "addons-214022"
	I1013 13:56:02.378915 1815551 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:02.378924 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378926 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.378954 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.378945 1815551 addons.go:69] Setting default-storageclass=true in profile "addons-214022"
	I1013 13:56:02.378942 1815551 addons.go:69] Setting gcp-auth=true in profile "addons-214022"
	I1013 13:56:02.378975 1815551 addons.go:69] Setting cloud-spanner=true in profile "addons-214022"
	I1013 13:56:02.378978 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214022"
	I1013 13:56:02.378963 1815551 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.378988 1815551 mustload.go:65] Loading cluster: addons-214022
	I1013 13:56:02.378999 1815551 addons.go:69] Setting registry=true in profile "addons-214022"
	I1013 13:56:02.379046 1815551 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-214022"
	I1013 13:56:02.379058 1815551 addons.go:238] Setting addon registry=true in "addons-214022"
	I1013 13:56:02.379079 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379103 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379214 1815551 config.go:182] Loaded profile config "addons-214022": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 13:56:02.379427 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.378987 1815551 addons.go:238] Setting addon cloud-spanner=true in "addons-214022"
	I1013 13:56:02.379425 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379478 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379483 1815551 addons.go:69] Setting storage-provisioner=true in profile "addons-214022"
	I1013 13:56:02.379488 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379497 1815551 addons.go:238] Setting addon storage-provisioner=true in "addons-214022"
	I1013 13:56:02.379503 1815551 addons.go:69] Setting ingress=true in profile "addons-214022"
	I1013 13:56:02.379519 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379522 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379532 1815551 addons.go:69] Setting ingress-dns=true in profile "addons-214022"
	I1013 13:56:02.379546 1815551 addons.go:238] Setting addon ingress-dns=true in "addons-214022"
	I1013 13:56:02.379575 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379616 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379653 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379682 1815551 addons.go:69] Setting volumesnapshots=true in profile "addons-214022"
	I1013 13:56:02.379814 1815551 addons.go:238] Setting addon volumesnapshots=true in "addons-214022"
	I1013 13:56:02.379879 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379926 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379490 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.379965 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379979 1815551 addons.go:69] Setting metrics-server=true in profile "addons-214022"
	I1013 13:56:02.379992 1815551 addons.go:238] Setting addon metrics-server=true in "addons-214022"
	I1013 13:56:02.380013 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.379520 1815551 addons.go:238] Setting addon ingress=true in "addons-214022"
	I1013 13:56:02.379924 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380064 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380076 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380107 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380112 1815551 addons.go:69] Setting inspektor-gadget=true in profile "addons-214022"
	I1013 13:56:02.380125 1815551 addons.go:238] Setting addon inspektor-gadget=true in "addons-214022"
	I1013 13:56:02.380158 1815551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214022"
	I1013 13:56:02.380221 1815551 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-214022"
	I1013 13:56:02.380272 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380445 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380510 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.379699 1815551 addons.go:69] Setting volcano=true in profile "addons-214022"
	I1013 13:56:02.380559 1815551 addons.go:238] Setting addon volcano=true in "addons-214022"
	I1013 13:56:02.380613 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.380634 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380666 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380790 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.380832 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.380876 1815551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214022"
	I1013 13:56:02.380894 1815551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214022"
	I1013 13:56:02.379472 1815551 addons.go:69] Setting registry-creds=true in profile "addons-214022"
	I1013 13:56:02.381003 1815551 addons.go:238] Setting addon registry-creds=true in "addons-214022"
	I1013 13:56:02.381112 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.381265 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.381293 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.381341 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.382020 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.382057 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.382817 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.383259 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.383291 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384195 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384256 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.384286 1815551 out.go:179] * Verifying Kubernetes components...
	I1013 13:56:02.384291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.384732 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.384782 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.387093 1815551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 13:56:02.392106 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.392163 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.396083 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.396162 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.410131 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I1013 13:56:02.411431 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1013 13:56:02.412218 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.412918 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.412942 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.413748 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.414498 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.415229 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.415286 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.415822 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.415843 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.420030 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I1013 13:56:02.420041 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1013 13:56:02.420259 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I1013 13:56:02.420298 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I1013 13:56:02.420346 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.420406 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I1013 13:56:02.420930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421041 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421071 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.421170 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.421581 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421600 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421753 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421769 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.421819 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.421832 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.422190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422264 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.422931 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.422976 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.423789 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.424161 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.424211 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.427224 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I1013 13:56:02.427390 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I1013 13:56:02.427782 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.427837 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428131 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.428460 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.428533 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.428569 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.428840 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.429601 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429621 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.429774 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.429786 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.430349 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430508 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.430777 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.430880 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431609 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.431937 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.431967 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.431989 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.432062 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432169 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.432395 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.432603 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.432771 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.433653 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.433706 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.433998 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.434042 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.434547 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I1013 13:56:02.441970 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1013 13:56:02.442071 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I1013 13:56:02.442458 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.442810 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.443536 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443557 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.443796 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.443813 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.444423 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.444487 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.445199 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.445303 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.445921 1815551 addons.go:238] Setting addon default-storageclass=true in "addons-214022"
	I1013 13:56:02.445974 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.446387 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.446430 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.447853 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1013 13:56:02.447930 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448413 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.448652 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.448673 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449317 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.449355 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.449911 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450071 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.450759 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.450802 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.452824 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1013 13:56:02.453268 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.453309 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.453388 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.453909 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.453944 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.454377 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.454945 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.455002 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.457726 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I1013 13:56:02.458946 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42105
	I1013 13:56:02.459841 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.460456 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.460471 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.460997 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.461059 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.461190 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.461893 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.462087 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.463029 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1013 13:56:02.463622 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.464283 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.464301 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.467881 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.468766 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1013 13:56:02.468880 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.470158 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.470767 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.470785 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.471160 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1013 13:56:02.471380 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.471463 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.471745 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.472514 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I1013 13:56:02.474011 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.474407 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.475349 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.475371 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.475936 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.477228 1815551 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-214022"
	I1013 13:56:02.477291 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:02.477704 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.477781 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.478540 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.478577 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.479296 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.479320 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.479338 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 13:56:02.479831 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.481287 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.482030 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.482191 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 13:56:02.482988 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1013 13:56:02.482206 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.483218 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.483796 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.484400 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.484415 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.485053 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485131 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.485219 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 13:56:02.485513 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.485624 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.488111 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 13:56:02.489703 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 13:56:02.490084 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1013 13:56:02.490663 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.490763 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.491660 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43493
	I1013 13:56:02.491817 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492275 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.492498 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.492417 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.492699 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.492943 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 13:56:02.493252 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.493468 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.493280 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 13:56:02.493907 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 13:56:02.494093 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.494695 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.495079 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.495408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.497771 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 13:56:02.498011 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.499118 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 13:56:02.499863 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I1013 13:56:02.500453 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:02.500464 1815551 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 13:56:02.500482 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.501046 1815551 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:02.501426 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 13:56:02.501453 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502344 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 13:56:02.502360 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 13:56:02.502380 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502511 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:02.502523 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 13:56:02.502539 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.502551 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.503704 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 13:56:02.504519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.504549 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.504971 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1013 13:56:02.505044 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 13:56:02.505476 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.505935 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.506132 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.506402 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 13:56:02.506420 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 13:56:02.506441 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.507553 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.507571 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.510588 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1013 13:56:02.511014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.512055 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.513064 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I1013 13:56:02.513461 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I1013 13:56:02.513806 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I1013 13:56:02.514065 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514237 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I1013 13:56:02.514353 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.514506 1815551 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.514759 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.514833 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.515238 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.515280 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.515776 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.516060 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516139 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.516152 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516158 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.516229 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 13:56:02.516543 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:02.516614 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:02.516690 1815551 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 13:56:02.517007 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.517014 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517062 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.517467 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.517483 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.517559 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.517562 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I1013 13:56:02.518311 1815551 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:02.518369 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 13:56:02.518393 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.518516 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.518540 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.518655 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519402 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.519519 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.519628 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.519763 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.519831 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521182 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.521199 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I1013 13:56:02.521204 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521239 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.521254 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.521455 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.521645 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.521859 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.522155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.522227 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.525058 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.526886 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.526989 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.527062 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.527172 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.527481 1815551 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:02.527499 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1013 13:56:02.527538 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.527916 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528591 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.530285 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530450 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528734 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.530629 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.530633 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.528801 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.528997 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529220 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I1013 13:56:02.529385 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.529699 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.530894 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.530917 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.531013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.529988 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.531257 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531828 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.532069 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.532264 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.532540 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.532554 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.531749 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.533563 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 13:56:02.533622 1815551 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 13:56:02.533679 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535465 1815551 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 13:56:02.533809 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I1013 13:56:02.533885 1815551 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 13:56:02.533999 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.534123 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.534155 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.535733 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.535024 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.536159 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.536202 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.536302 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.537059 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.537168 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537279 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I1013 13:56:02.537305 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 13:56:02.537322 1815551 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 13:56:02.537342 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.537456 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.537805 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.537934 1815551 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:02.537945 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 13:56:02.537970 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538046 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 13:56:02.538056 1815551 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 13:56:02.538070 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.538169 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.538186 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.538982 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:02.539022 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 13:56:02.539053 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.540639 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.541678 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.541498 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.541528 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.542401 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.542692 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.541543 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.542639 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.542646 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.542566 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.543111 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.543500 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.544260 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.545374 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.545706 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.545773 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.546359 1815551 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 13:56:02.546363 1815551 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 13:56:02.546634 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.546830 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1013 13:56:02.547953 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.547975 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.548147 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.548267 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.548438 1815551 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:02.548451 1815551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 13:56:02.548473 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548649 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 13:56:02.548665 1815551 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 13:56:02.548684 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.548741 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:02.548751 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.548789 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 13:56:02.549764 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549774 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.549766 1815551 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 13:56:02.549808 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549829 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.549138 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549891 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.549914 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.549939 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.550519 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:02.550541 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:02.550650 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551438 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551458 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.551469 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.551478 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.551613 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.551695 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.551911 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.551979 1815551 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 13:56:02.552033 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552094 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.552921 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.552947 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.552922 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.552965 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.553027 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553037 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:02.553282 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.553338 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553396 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:02.553415 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.553448 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.553810 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.554101 1815551 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:02.554108 1815551 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 13:56:02.554116 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 13:56:02.554188 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.555708 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:02.555861 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 13:56:02.555886 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.555860 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.555999 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.556383 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.556783 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.557013 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.557193 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.558058 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.558134 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:02.559028 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.559068 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.559315 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.559492 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.559902 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.560012 1815551 out.go:179]   - Using image docker.io/busybox:stable
	I1013 13:56:02.560174 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.560282 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560454 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.560952 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561002 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561155 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.561186 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561489 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561674 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.561738 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.561760 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561891 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.561942 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562049 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.562133 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562208 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.562304 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.562325 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.562663 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.562854 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.563028 1815551 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 13:56:02.563073 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.563249 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:02.564627 1815551 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:02.564650 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 13:56:02.564672 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:02.568502 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569018 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:02.569056 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:02.569235 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:02.569424 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:02.569582 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:02.569725 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:03.342481 1815551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 13:56:03.342511 1815551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 13:56:03.415927 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 13:56:03.502503 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 13:56:03.509312 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 13:56:03.553702 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 13:56:03.553739 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 13:56:03.554436 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 13:56:03.554458 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 13:56:03.558285 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 13:56:03.558305 1815551 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 13:56:03.648494 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 13:56:03.699103 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 13:56:03.779563 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 13:56:03.812678 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 13:56:03.812733 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 13:56:03.829504 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 13:56:03.832700 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 13:56:03.897242 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 13:56:03.897268 1815551 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 13:56:03.905550 1815551 node_ready.go:35] waiting up to 6m0s for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909125 1815551 node_ready.go:49] node "addons-214022" is "Ready"
	I1013 13:56:03.909165 1815551 node_ready.go:38] duration metric: took 3.564505ms for node "addons-214022" to be "Ready" ...
	I1013 13:56:03.909180 1815551 api_server.go:52] waiting for apiserver process to appear ...
	I1013 13:56:03.909241 1815551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 13:56:03.957336 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1013 13:56:04.136232 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 13:56:04.201240 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 13:56:04.201271 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 13:56:04.228704 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 13:56:04.228758 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 13:56:04.287683 1815551 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.287738 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 13:56:04.507887 1815551 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:04.507919 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 13:56:04.641317 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 13:56:04.641349 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 13:56:04.710332 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 13:56:04.710378 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 13:56:04.712723 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 13:56:04.712755 1815551 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 13:56:04.822157 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:04.887676 1815551 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:04.887707 1815551 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 13:56:04.968928 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 13:56:05.069666 1815551 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 13:56:05.069709 1815551 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 13:56:05.164254 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 13:56:05.164289 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 13:56:05.171441 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 13:56:05.171470 1815551 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 13:56:05.278956 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 13:56:05.595927 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 13:56:05.595960 1815551 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 13:56:05.703182 1815551 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 13:56:05.703221 1815551 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 13:56:05.763510 1815551 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:05.763544 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 13:56:06.065261 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 13:56:06.086528 1815551 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.086558 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 13:56:06.241763 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 13:56:06.241791 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 13:56:06.468347 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:06.948294 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 13:56:06.948335 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 13:56:07.247516 1815551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.904962804s)
	I1013 13:56:07.247565 1815551 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 13:56:07.247597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.83162272s)
	I1013 13:56:07.247662 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.247685 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248180 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248198 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.248211 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:07.248221 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:07.248546 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:07.248628 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:07.248648 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:07.509546 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 13:56:07.509581 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 13:56:07.797697 1815551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214022" context rescaled to 1 replicas
	I1013 13:56:08.114046 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 13:56:08.114078 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 13:56:08.819818 1815551 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:08.819848 1815551 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 13:56:08.894448 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 13:56:09.954565 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 13:56:09.954611 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:09.959281 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.959849 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:09.959886 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:09.960116 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:09.960364 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:09.960569 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:09.960746 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:10.901573 1815551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 13:56:11.367882 1815551 addons.go:238] Setting addon gcp-auth=true in "addons-214022"
	I1013 13:56:11.367958 1815551 host.go:66] Checking if "addons-214022" exists ...
	I1013 13:56:11.368474 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.368530 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.384151 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I1013 13:56:11.384793 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.385376 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.385403 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.385815 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.386578 1815551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 13:56:11.386622 1815551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 13:56:11.401901 1815551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I1013 13:56:11.402499 1815551 main.go:141] libmachine: () Calling .GetVersion
	I1013 13:56:11.403178 1815551 main.go:141] libmachine: Using API Version  1
	I1013 13:56:11.403201 1815551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 13:56:11.403629 1815551 main.go:141] libmachine: () Calling .GetMachineName
	I1013 13:56:11.403840 1815551 main.go:141] libmachine: (addons-214022) Calling .GetState
	I1013 13:56:11.405902 1815551 main.go:141] libmachine: (addons-214022) Calling .DriverName
	I1013 13:56:11.406201 1815551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 13:56:11.406233 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHHostname
	I1013 13:56:11.409331 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409779 1815551 main.go:141] libmachine: (addons-214022) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:c6:7b", ip: ""} in network mk-addons-214022: {Iface:virbr1 ExpiryTime:2025-10-13 14:55:36 +0000 UTC Type:0 Mac:52:54:00:45:c6:7b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-214022 Clientid:01:52:54:00:45:c6:7b}
	I1013 13:56:11.409810 1815551 main.go:141] libmachine: (addons-214022) DBG | domain addons-214022 has defined IP address 192.168.39.214 and MAC address 52:54:00:45:c6:7b in network mk-addons-214022
	I1013 13:56:11.409983 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHPort
	I1013 13:56:11.410205 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHKeyPath
	I1013 13:56:11.410408 1815551 main.go:141] libmachine: (addons-214022) Calling .GetSSHUsername
	I1013 13:56:11.410637 1815551 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/addons-214022/id_rsa Username:docker}
	I1013 13:56:13.559421 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.0568709s)
	I1013 13:56:13.559481 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559478 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.050128857s)
	I1013 13:56:13.559507 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.910967928s)
	I1013 13:56:13.559530 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559544 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559553 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.860416384s)
	I1013 13:56:13.559562 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559571 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559579 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559619 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.780022659s)
	I1013 13:56:13.559648 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559663 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559691 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.726948092s)
	I1013 13:56:13.559546 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559707 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.559728 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559764 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.730231108s)
	I1013 13:56:13.559493 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559784 1815551 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (9.650528788s)
	I1013 13:56:13.559801 1815551 api_server.go:72] duration metric: took 11.181129031s to wait for apiserver process to appear ...
	I1013 13:56:13.559808 1815551 api_server.go:88] waiting for apiserver healthz status ...
	I1013 13:56:13.559830 1815551 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1013 13:56:13.559992 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560020 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560048 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560055 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560063 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560071 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560072 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560083 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560090 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560098 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.559785 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560331 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560332 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560338 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560345 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560391 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560394 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560400 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560407 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560410 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560412 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560425 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560447 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560450 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560456 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560461 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560464 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560467 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560491 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560508 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.560613 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560624 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560903 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.560967 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.560976 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.560987 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.560995 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.561056 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561078 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561085 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561188 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561210 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561237 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561243 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561445 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561453 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.561462 1815551 addons.go:479] Verifying addon ingress=true in "addons-214022"
	I1013 13:56:13.561689 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.561732 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.561739 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563431 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.563516 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.563493 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.564138 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.564155 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:13.564164 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.566500 1815551 out.go:179] * Verifying ingress addon...
	I1013 13:56:13.568872 1815551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 13:56:13.679959 1815551 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1013 13:56:13.701133 1815551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 13:56:13.701173 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:13.713292 1815551 api_server.go:141] control plane version: v1.34.1
	I1013 13:56:13.713342 1815551 api_server.go:131] duration metric: took 153.525188ms to wait for apiserver health ...
	I1013 13:56:13.713357 1815551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 13:56:13.839550 1815551 system_pods.go:59] 15 kube-system pods found
	I1013 13:56:13.839596 1815551 system_pods.go:61] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:13.839608 1815551 system_pods.go:61] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839614 1815551 system_pods.go:61] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:13.839621 1815551 system_pods.go:61] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:13.839626 1815551 system_pods.go:61] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:13.839631 1815551 system_pods.go:61] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:13.839643 1815551 system_pods.go:61] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:13.839649 1815551 system_pods.go:61] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:13.839655 1815551 system_pods.go:61] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:13.839662 1815551 system_pods.go:61] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:13.839676 1815551 system_pods.go:61] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:13.839684 1815551 system_pods.go:61] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:13.839690 1815551 system_pods.go:61] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:13.839698 1815551 system_pods.go:61] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:13.839701 1815551 system_pods.go:61] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:13.839708 1815551 system_pods.go:74] duration metric: took 126.345191ms to wait for pod list to return data ...
	I1013 13:56:13.839738 1815551 default_sa.go:34] waiting for default service account to be created ...
	I1013 13:56:13.942067 1815551 default_sa.go:45] found service account: "default"
	I1013 13:56:13.942106 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:13.942111 1815551 default_sa.go:55] duration metric: took 102.363552ms for default service account to be created ...
	I1013 13:56:13.942129 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:13.942130 1815551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 13:56:13.942465 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:13.942473 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:13.942485 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:14.047220 1815551 system_pods.go:86] 15 kube-system pods found
	I1013 13:56:14.047259 1815551 system_pods.go:89] "amd-gpu-device-plugin-k6tpl" [35af7007-90fb-4693-b446-6d5b0c330c41] Running
	I1013 13:56:14.047272 1815551 system_pods.go:89] "coredns-66bc5c9577-5xlpv" [a264f9f2-5984-41fe-add8-9d6ebaed4f69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047280 1815551 system_pods.go:89] "coredns-66bc5c9577-h4thg" [8ac2f4c5-6c09-4497-b49b-8954e93044c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 13:56:14.047291 1815551 system_pods.go:89] "etcd-addons-214022" [ede48884-e63c-4714-850a-8c0c9297c9c1] Running
	I1013 13:56:14.047297 1815551 system_pods.go:89] "kube-apiserver-addons-214022" [06781741-6f8f-4114-825b-d3f3aa064df4] Running
	I1013 13:56:14.047303 1815551 system_pods.go:89] "kube-controller-manager-addons-214022" [3ee160a1-b911-452c-a2b0-bf3639979654] Running
	I1013 13:56:14.047311 1815551 system_pods.go:89] "kube-ingress-dns-minikube" [ea5bb1f4-d9a4-4505-8af3-f4a087e5e9ac] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 13:56:14.047316 1815551 system_pods.go:89] "kube-proxy-m9kg9" [f403dea2-6775-470f-b8ce-2aedd522afe9] Running
	I1013 13:56:14.047323 1815551 system_pods.go:89] "kube-scheduler-addons-214022" [74b43d38-d5a7-41aa-83ad-f42bce4a2f33] Running
	I1013 13:56:14.047333 1815551 system_pods.go:89] "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 13:56:14.047343 1815551 system_pods.go:89] "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 13:56:14.047360 1815551 system_pods.go:89] "registry-66898fdd98-qpt8q" [4a93c83e-03fe-4e05-909f-bd2339c6559f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 13:56:14.047368 1815551 system_pods.go:89] "registry-creds-764b6fb674-rsjlm" [3c1885cc-c9ac-48aa-bfe5-5873197f65f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 13:56:14.047377 1815551 system_pods.go:89] "registry-proxy-qdl2b" [664dea93-73bb-4760-9d08-e3736f1ccc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 13:56:14.047386 1815551 system_pods.go:89] "storage-provisioner" [275d8626-2352-401b-9be5-f5d385dcad13] Running
	I1013 13:56:14.047403 1815551 system_pods.go:126] duration metric: took 105.264628ms to wait for k8s-apps to be running ...
	I1013 13:56:14.047417 1815551 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 13:56:14.047478 1815551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 13:56:14.113581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:14.930679 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.130040 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:15.620233 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.296801 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:16.658297 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.084581 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:17.640914 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.131818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.760793 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:18.821597 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (14.86421149s)
	I1013 13:56:18.821631 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.685366971s)
	I1013 13:56:18.821668 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821748 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821787 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.821872 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (13.9996555s)
	W1013 13:56:18.821914 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821934 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (13.852967871s)
	I1013 13:56:18.821959 1815551 retry.go:31] will retry after 212.802499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:18.821975 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.821989 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822111 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.543120613s)
	I1013 13:56:18.822130 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822146 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822157 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822250 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822256 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822259 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822273 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822291 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822289 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822274 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.756980139s)
	I1013 13:56:18.822314 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822260 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822320 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822299 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822334 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822345 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822325 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822357 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822331 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822386 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822394 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.354009404s)
	W1013 13:56:18.822426 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822447 1815551 retry.go:31] will retry after 341.080561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 13:56:18.822631 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822646 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822660 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.822666 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822674 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.822684 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822691 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822702 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822726 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822801 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.822818 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.822890 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.928381136s)
	I1013 13:56:18.822936 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.822947 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823037 1815551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.416805726s)
	I1013 13:56:18.822701 1815551 addons.go:479] Verifying addon registry=true in "addons-214022"
	I1013 13:56:18.823408 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823442 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823449 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823457 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.823463 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.823529 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.823549 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823554 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823563 1815551 addons.go:479] Verifying addon metrics-server=true in "addons-214022"
	I1013 13:56:18.823922 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.823939 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.823978 1815551 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.776478568s)
	I1013 13:56:18.826440 1815551 system_svc.go:56] duration metric: took 4.779015598s WaitForService to wait for kubelet
	I1013 13:56:18.826457 1815551 kubeadm.go:586] duration metric: took 16.447782815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 13:56:18.826480 1815551 node_conditions.go:102] verifying NodePressure condition ...
	I1013 13:56:18.824018 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.824271 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.826526 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.826549 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:18.826556 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:18.826909 1815551 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 13:56:18.827041 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:18.827056 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:18.827324 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:18.827349 1815551 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-214022"
	I1013 13:56:18.827631 1815551 out.go:179] * Verifying registry addon...
	I1013 13:56:18.827639 1815551 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214022 service yakd-dashboard -n yakd-dashboard
	
	I1013 13:56:18.828579 1815551 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 13:56:18.830389 1815551 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 13:56:18.830649 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 13:56:18.831072 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 13:56:18.831622 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 13:56:18.831641 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 13:56:18.904373 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 13:56:18.904404 1815551 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 13:56:18.958203 1815551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 13:56:18.958240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:18.968879 1815551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 13:56:18.968905 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:18.980574 1815551 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:18.980605 1815551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 13:56:18.989659 1815551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 13:56:18.989692 1815551 node_conditions.go:123] node cpu capacity is 2
	I1013 13:56:18.989704 1815551 node_conditions.go:105] duration metric: took 163.213438ms to run NodePressure ...
	I1013 13:56:18.989726 1815551 start.go:241] waiting for startup goroutines ...
	I1013 13:56:19.035462 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:19.044517 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:19.044541 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:19.044887 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:19.044920 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:19.044937 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:19.076791 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 13:56:19.115345 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.164325 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 13:56:19.492227 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.492514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:19.578775 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:19.860209 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:19.860435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.075338 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.338880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.339590 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:20.591872 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:20.839272 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:20.840410 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.147212 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.341334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:21.342792 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.576751 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:21.816476 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.780960002s)
	W1013 13:56:21.816548 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816583 1815551 retry.go:31] will retry after 241.635364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:21.816594 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.739753765s)
	I1013 13:56:21.816659 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816682 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.816682 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652313132s)
	I1013 13:56:21.816724 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.816742 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817049 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817064 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817072 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817094 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817135 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817206 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817222 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817231 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:56:21.817240 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:56:21.817331 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:56:21.817362 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817373 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.817637 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:56:21.817658 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:56:21.820100 1815551 addons.go:479] Verifying addon gcp-auth=true in "addons-214022"
	I1013 13:56:21.822251 1815551 out.go:179] * Verifying gcp-auth addon...
	I1013 13:56:21.824621 1815551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 13:56:21.835001 1815551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 13:56:21.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:21.838795 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:21.840850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.059249 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:22.077627 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.330307 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.336339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.337042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:22.574406 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:22.832108 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:22.838566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:22.838826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 13:56:22.914754 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:22.914802 1815551 retry.go:31] will retry after 760.892054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:23.073359 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.329443 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.336062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:23.336518 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.576107 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:23.676911 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:23.852063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:23.852111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:23.852394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.075386 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:24.331600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.340818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:24.343374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.572818 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:24.620054 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.620094 1815551 retry.go:31] will retry after 1.157322101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:24.831852 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:24.836023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:24.836880 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.073842 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.328390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.335179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:25.337258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.650194 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:25.777621 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:25.840280 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:25.846148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:25.847000 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.073966 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:26.329927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.335473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.335806 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:26.575967 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:56:26.717807 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.717838 1815551 retry.go:31] will retry after 1.353453559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:26.828801 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:26.834019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:26.836503 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.073185 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.329339 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.337730 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.338165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:27.576514 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:27.828768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:27.835828 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:27.836163 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.071440 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:28.372264 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.372321 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.373313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:28.374357 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.576799 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:28.830178 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:28.839906 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:28.841861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1013 13:56:29.026067 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.026119 1815551 retry.go:31] will retry after 2.314368666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:29.075636 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.331372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.334421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:29.336311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.574567 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:29.828489 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:29.836190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:29.836214 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.073854 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.328358 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.337153 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:30.572800 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:30.829360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:30.836930 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:30.838278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.115447 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.341310 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:31.386485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.389205 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:31.390131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.594587 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:31.838151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:31.859495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:31.859525 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.074372 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.329175 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.337700 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.340721 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:32.450731 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109365647s)
	W1013 13:56:32.450775 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.450795 1815551 retry.go:31] will retry after 3.150290355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:32.578006 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:32.830600 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:32.835361 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:32.837984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.072132 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.330611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.336957 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.338768 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:33.576304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:33.832311 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:33.837282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:33.839687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.073260 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.328435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.335455 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.338454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:34.573208 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:34.829194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:34.836540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:34.838519 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.073549 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.329626 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:35.336677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.573553 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:35.601692 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:35.833491 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:35.847288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:35.853015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.073279 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.332575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.339486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.345783 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.575174 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:36.831613 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:36.838390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:36.839346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:36.873620 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271867515s)
	W1013 13:56:36.873678 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:36.873707 1815551 retry.go:31] will retry after 2.895058592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
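The retry loop above keeps failing on the same validation error: kubectl refuses a manifest document that is missing its `apiVersion` and `kind` fields, which typically means one YAML document in the applied file is empty or truncated. A minimal sketch of the header every document in a CRD manifest needs (illustrative field values, not the actual contents of ig-crd.yaml):

```yaml
# Every YAML document passed to `kubectl apply` must declare these two
# fields, or client-side validation fails with "apiVersion not set, kind
# not set" (names below are placeholders, not the real gadget CRD):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.example.io
spec:
  group: example.io
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Note that `kubectl apply --force` retries the whole multi-document file, so the non-CRD resources report `unchanged` on each attempt while the malformed document fails validation every time.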
	I1013 13:56:37.073691 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.328849 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.335191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.337850 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:37.572952 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:37.830399 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:37.834346 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:37.835091 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.074246 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.329068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.334746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:38.336761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.574900 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:38.829389 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:38.836693 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:38.838345 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.073278 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.329302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.339598 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.340006 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:39.572295 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:39.769464 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:39.829653 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:39.836342 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:39.836508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.073770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.329739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.334329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.336269 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.691416 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:40.831148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:40.837541 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:40.839843 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:40.983908 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.214399822s)
	W1013 13:56:40.983958 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:40.983985 1815551 retry.go:31] will retry after 7.225185704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:41.073163 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.329997 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.335409 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.338433 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:41.666422 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:41.829493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:41.835176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:41.835834 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.072985 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.330254 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.339275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.343430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:42.574234 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:42.831039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:42.835619 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:42.838197 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.072757 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.328191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.337547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:43.337556 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.573563 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:43.840684 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:43.842458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:43.848748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.073791 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.328352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.335902 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.337655 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:44.575764 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:44.834421 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:44.839189 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:44.844388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.073743 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.328774 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.336100 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:45.336438 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.601555 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:45.830165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:45.835830 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:45.838487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.074421 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.328961 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.334499 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.335387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:46.574665 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:46.829543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:46.835535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:46.837472 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.076871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.328763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.335050 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:47.337454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.572647 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:47.829879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:47.834618 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:47.837273 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.082833 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.210068 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:48.329748 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.336813 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:48.339418 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.577288 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:48.957818 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:48.960308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:48.964374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.076388 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.310522 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.100404712s)
	W1013 13:56:49.310569 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.310590 1815551 retry.go:31] will retry after 8.278511579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:49.333318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.335452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.338043 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:49.577394 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:49.830452 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:49.835251 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:49.837381 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.073417 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.329558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:50.339077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.574733 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:50.830760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:50.835530 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:50.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.077542 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.331547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.335448 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:51.336576 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.572984 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:51.829083 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:51.837258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:51.837328 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.072950 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.329542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.335485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:52.335539 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.572971 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:52.828509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:52.836901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:52.837310 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.074048 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.333265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.335372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.336434 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:53.574864 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:53.830933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:53.838072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:53.839851 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.074866 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.338983 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.339799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:54.344377 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.574702 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:54.828114 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:54.835495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:54.837122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.074420 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:55.329544 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:55.336073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:55.336305 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:55.578331 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.005987 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.006040 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.008625 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.083827 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.328560 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.335079 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.335136 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:56.575579 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:56.830373 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:56.835033 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:56.835179 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.087195 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.332845 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.337372 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.338029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:57.576538 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:57.589639 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:56:57.830334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:57.836937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:57.838662 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.112247 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.336059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.348974 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.350146 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.573280 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:58.842857 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:58.842873 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:58.842888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:58.924998 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.335308989s)
	W1013 13:56:58.925066 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:58.925097 1815551 retry.go:31] will retry after 13.924020767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:56:59.072616 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.329181 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.335127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.335993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:56:59.575343 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:56:59.830551 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:56:59.836400 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:56:59.837278 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.078387 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.333707 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.375230 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:00.376823 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.572444 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:00.829334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:00.835575 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:00.835799 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.079304 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.330385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.335250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:01.581487 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:01.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:01.837221 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:01.837449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.078263 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:02.330056 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:02.339092 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:02.339093 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:02.577091 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.077029 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.077446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.077527 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.154987 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.328809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.335973 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.336466 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:03.574053 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:03.832304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:03.836898 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:03.837250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.072871 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.329704 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.335445 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.335648 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:04.573740 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:04.828297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:04.838545 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:04.839359 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.073273 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.331167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.337263 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:05.339875 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.572747 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:05.831331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:05.842003 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:05.930357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.076706 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.328910 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.336063 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.343356 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:06.584114 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:06.830148 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:06.835936 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:06.837800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.073829 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.332895 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.335938 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:07.336485 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.573658 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:07.829535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:07.834609 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:07.841665 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.077534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.328984 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.333490 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.335036 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:08.574315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:08.830309 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:08.835288 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:08.838864 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.075894 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.330037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.335138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.336913 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:09.572525 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:09.828315 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:09.835125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 13:57:09.835169 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.074415 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.330449 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.334152 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:10.338372 1815551 kapi.go:107] duration metric: took 51.507291615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 13:57:10.573600 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:10.829312 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:10.834624 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.073690 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.329540 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.334164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:11.575859 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:11.829406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:11.834682 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.073929 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.328430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.335019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.574762 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:12.828887 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:12.833318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:12.849353 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:13.075935 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:13.329099 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.336236 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:13.573534 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 13:57:13.587679 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.587745 1815551 retry.go:31] will retry after 13.672716628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:13.828261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:13.835435 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.073229 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.328789 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.334388 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:14.573428 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:14.829403 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:14.834752 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.074458 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.330167 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.334526 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:15.573869 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:15.828247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:15.834508 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.073598 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.329584 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.335058 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:16.573770 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:16.829437 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:16.834668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.073034 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.330112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.334151 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:17.572834 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:17.827923 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:17.834428 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.074227 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.332800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.338122 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:18.574366 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:18.829944 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:18.835390 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.073063 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.330933 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.334816 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:19.578792 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:19.829059 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:19.834174 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.073867 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.328553 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.335769 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:20.577315 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:20.828820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:20.834111 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.074340 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.348186 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.348277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:21.577133 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:21.828486 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:21.835130 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.074094 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.329573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.336976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:22.576302 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:22.829112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:22.835023 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.073276 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.332360 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:23.574812 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:23.828888 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:23.836976 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.073895 1815551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 13:57:24.329298 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.345232 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:24.573291 1815551 kapi.go:107] duration metric: took 1m11.00441945s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 13:57:24.829727 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:24.834903 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.328687 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.335809 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:25.830863 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:25.833805 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335112 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:26.335646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.829658 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:26.834781 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.261314 1815551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 13:57:27.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.335935 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:27.840969 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:27.841226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.331295 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.336284 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:28.567555 1815551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.306188084s)
	W1013 13:57:28.567634 1815551 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 13:57:28.567738 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.567757 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568060 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568121 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568134 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 13:57:28.568150 1815551 main.go:141] libmachine: Making call to close driver server
	I1013 13:57:28.568163 1815551 main.go:141] libmachine: (addons-214022) Calling .Close
	I1013 13:57:28.568426 1815551 main.go:141] libmachine: (addons-214022) DBG | Closing plugin on server side
	I1013 13:57:28.568464 1815551 main.go:141] libmachine: Successfully made call to close driver server
	I1013 13:57:28.568475 1815551 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 13:57:28.568614 1815551 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
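
The retries above all fail for the same reason: kubectl's client-side validation rejects `/etc/kubernetes/addons/ig-crd.yaml` because at least one YAML document in it is missing the required top-level `apiVersion` and `kind` fields. A pre-flight check along these lines can flag such manifests before `kubectl apply` is ever run; this is a hypothetical stdlib-only diagnostic sketch (the `missing_fields` helper and the sample manifest are illustrative, not part of minikube or kubectl).

```python
# Minimal stdlib-only check: every YAML document in a manifest should
# declare top-level "apiVersion" and "kind" (the fields kubectl's
# validation reported as missing for ig-crd.yaml above).
def missing_fields(manifest: str):
    """Return a list of (doc_index, missing_keys) for offending documents."""
    problems = []
    # Split a multi-document manifest on the YAML document separator.
    docs = [d for d in manifest.split("\n---") if d.strip()]
    for i, doc in enumerate(docs):
        # Collect top-level keys: lines containing ":" that are not
        # indented and are not comments.
        keys = {line.split(":", 1)[0].strip()
                for line in doc.splitlines()
                if ":" in line and not line.startswith((" ", "\t", "#"))}
        missing = [k for k in ("apiVersion", "kind") if k not in keys]
        if missing:
            problems.append((i, missing))
    return problems

# The second document below deliberately omits both required fields,
# mirroring the validation error in the log.
sample = """apiVersion: v1
kind: Namespace
metadata:
  name: gadget
---
metadata:
  name: broken-crd
"""
print(missing_fields(sample))  # -> [(1, ['apiVersion', 'kind'])]
```

Running such a check against the addon manifests before the apply loop would surface the malformed document immediately instead of after three timed retries.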
	I1013 13:57:28.828678 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:28.834833 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.329605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 13:57:29.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:29.829667 1815551 kapi.go:107] duration metric: took 1m8.005042215s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 13:57:29.831603 1815551 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-214022 cluster.
	I1013 13:57:29.832969 1815551 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 13:57:29.834368 1815551 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 13:57:29.835165 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.335102 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:30.834820 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.337927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:31.836162 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.334652 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:32.834868 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.335329 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:33.836940 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.335265 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:34.835299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.334493 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:35.835958 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.336901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:36.836037 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.334865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:37.835645 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.335331 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:38.835376 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.334760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:39.835451 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.335213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:40.835487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.334559 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:41.835709 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.336510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:42.835078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.334427 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:43.835800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.335872 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:44.836213 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:45.835870 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.336474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:46.835258 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.335636 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:47.835120 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.335125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:48.835336 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.334300 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:49.834511 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.334734 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:50.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.336483 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:51.835357 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.334098 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:52.834039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.336018 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:53.836261 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.334061 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:54.834919 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.334649 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:55.835154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.336354 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:56.834937 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.335025 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:57.835808 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.335509 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:58.835220 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.335287 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:57:59.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.336327 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:00.836514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.335176 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:01.835391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.335754 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:02.834954 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.337125 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:03.836950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.335741 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:04.835238 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.334514 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:05.836800 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.335199 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:06.834223 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.334374 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:07.834313 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.335017 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:08.836739 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.334637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:09.836138 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:10.837760 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.335601 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:11.834423 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.335277 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:12.835297 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.334190 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:13.835779 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.335566 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:14.834803 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.335076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:15.834352 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.337145 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:16.836318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.335627 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:17.834879 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.335150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:18.834450 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.335022 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:19.836226 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.335160 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:20.836271 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.335097 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:21.835164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.335103 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:22.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.335568 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:23.836839 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.335318 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:24.836164 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.334826 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:25.835127 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.336865 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:26.836135 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.335101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:27.835724 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.336673 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:28.835150 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.334589 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:29.834578 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.335334 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:30.835296 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.335639 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:31.836101 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.334964 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:32.835761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.335325 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:33.836391 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.335041 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:34.836020 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.335603 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:35.834446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.336822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:36.835728 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.335299 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:37.834134 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.335154 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:38.836561 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.336212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:39.834967 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:40.835791 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.335558 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:41.835276 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.335841 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:42.836019 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.335293 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:43.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.334744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:44.834701 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.335446 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:45.835594 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.337105 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:46.834479 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.335535 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:47.835194 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.335256 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:48.834824 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.336078 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:49.835454 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.335291 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:50.835631 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.336375 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:51.835517 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.335533 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:52.835668 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.334675 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:53.836765 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.335738 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:54.835614 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.334992 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:55.834761 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.335487 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:56.835039 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.335024 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:57.835393 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.335510 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:58.834835 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.335247 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:58:59.835193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.337646 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:00.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.334671 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:01.835950 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.335072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:02.835262 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.336068 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:03.838250 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.336473 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:04.834196 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.335794 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:05.835516 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.336890 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:06.835562 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.336117 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:07.835027 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.336076 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:08.835382 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.334500 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:09.835763 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.335780 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:10.834829 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.335922 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:11.835807 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.335268 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:12.835042 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.334861 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:13.835742 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:14.835602 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.334326 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:15.835542 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.336308 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:16.834819 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.334458 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:17.834430 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.335482 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:18.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.334302 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:19.834698 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.335242 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:20.837355 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.334901 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:21.835822 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.335481 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:22.835077 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.335379 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:23.835858 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.335030 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:24.834848 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 13:59:25.334406 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	... [identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending: [<nil>]" messages repeated at ~500ms intervals from 13:59:25 through 14:01:41; the pod remained Pending for the entire span] ...
	I1013 14:01:41.336394 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:41.834746 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:42.336193 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:42.835282 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:43.334495 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:43.835755 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:44.335371 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:44.835573 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:45.335010 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:45.835070 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:46.337081 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:46.836917 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:47.336075 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:47.836303 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:48.335543 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:48.835842 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:49.336304 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:49.835123 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:50.334303 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:50.836073 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.337121 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:51.834790 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.335474 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:52.835147 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.334622 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:53.834679 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.334975 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:54.835505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.335547 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:55.834320 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.337072 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:56.835338 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.334677 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:57.835088 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.334605 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:58.834688 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.336323 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:01:59.835956 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.336504 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:00.836995 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.335212 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:01.834385 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.335476 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:02.835502 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.335371 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:03.836012 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.335744 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:04.834380 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.335240 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:05.835337 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.335893 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:06.834620 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.335637 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:07.834524 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.334081 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:08.835413 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.334814 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:09.834505 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.335015 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:10.835005 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.336275 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:11.835387 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.335267 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:12.835234 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.335689 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:13.835131 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.336968 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:14.835611 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.335211 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:15.835927 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.337411 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:16.834441 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.335062 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:17.835993 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.336191 1815551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 14:02:18.831884 1815551 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1013 14:02:18.831927 1815551 kapi.go:107] duration metric: took 6m0.001279478s to wait for kubernetes.io/minikube-addons=registry ...
	W1013 14:02:18.832048 1815551 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1013 14:02:18.834028 1815551 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, default-storageclass, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, csi-hostpath-driver, ingress, gcp-auth
	I1013 14:02:18.835547 1815551 addons.go:514] duration metric: took 6m16.456841938s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin default-storageclass volcano metrics-server yakd storage-provisioner-rancher volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I1013 14:02:18.835619 1815551 start.go:246] waiting for cluster config update ...
	I1013 14:02:18.835653 1815551 start.go:255] writing updated cluster config ...
	I1013 14:02:18.835985 1815551 ssh_runner.go:195] Run: rm -f paused
	I1013 14:02:18.844672 1815551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:18.850989 1815551 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.858822 1815551 pod_ready.go:94] pod "coredns-66bc5c9577-h4thg" is "Ready"
	I1013 14:02:18.858851 1815551 pod_ready.go:86] duration metric: took 7.830127ms for pod "coredns-66bc5c9577-h4thg" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.861510 1815551 pod_ready.go:83] waiting for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.866947 1815551 pod_ready.go:94] pod "etcd-addons-214022" is "Ready"
	I1013 14:02:18.866978 1815551 pod_ready.go:86] duration metric: took 5.438269ms for pod "etcd-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.870108 1815551 pod_ready.go:83] waiting for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.876071 1815551 pod_ready.go:94] pod "kube-apiserver-addons-214022" is "Ready"
	I1013 14:02:18.876101 1815551 pod_ready.go:86] duration metric: took 5.952573ms for pod "kube-apiserver-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:18.879444 1815551 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.250700 1815551 pod_ready.go:94] pod "kube-controller-manager-addons-214022" is "Ready"
	I1013 14:02:19.250743 1815551 pod_ready.go:86] duration metric: took 371.273475ms for pod "kube-controller-manager-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.452146 1815551 pod_ready.go:83] waiting for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:19.850363 1815551 pod_ready.go:94] pod "kube-proxy-m9kg9" is "Ready"
	I1013 14:02:19.850396 1815551 pod_ready.go:86] duration metric: took 398.220518ms for pod "kube-proxy-m9kg9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.050567 1815551 pod_ready.go:83] waiting for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449725 1815551 pod_ready.go:94] pod "kube-scheduler-addons-214022" is "Ready"
	I1013 14:02:20.449765 1815551 pod_ready.go:86] duration metric: took 399.169231ms for pod "kube-scheduler-addons-214022" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:02:20.449779 1815551 pod_ready.go:40] duration metric: took 1.605053066s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:02:20.499765 1815551 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:02:20.501422 1815551 out.go:179] * Done! kubectl is now configured to use "addons-214022" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4b9c2b1e8388b       56cc512116c8f       2 minutes ago       Running             busybox                                  0                   c2017033bd492       busybox
	d6a3c830fdead       1bec18b3728e7       14 minutes ago      Running             controller                               0                   b82d6ab22225e       ingress-nginx-controller-9cc49f96f-7jf8g
	dc9eac6946abb       738351fd438f0       14 minutes ago      Running             csi-snapshotter                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	caf59fa52cf6c       931dbfd16f87c       14 minutes ago      Running             csi-provisioner                          0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	dcdb3cedeedc5       e899260153aed       14 minutes ago      Running             liveness-probe                           0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	20320037960be       e255e073c508c       14 minutes ago      Running             hostpath                                 0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	251c9387cb3f1       88ef14a257f42       14 minutes ago      Running             node-driver-registrar                    0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	4bf53d30ff2bf       19a639eda60f0       14 minutes ago      Running             csi-resizer                              0                   38173b2da332e       csi-hostpath-resizer-0
	da92c998f6d36       a1ed5895ba635       14 minutes ago      Running             csi-external-health-monitor-controller   0                   abd9e20e6db7a       csi-hostpathplugin-4jxqs
	fdb740423cae7       aa61ee9c70bc4       14 minutes ago      Running             volume-snapshot-controller               0                   d87f7092f76cb       snapshot-controller-7d9fbc56b8-fcqg8
	d9300160a8179       59cbb42146a37       14 minutes ago      Running             csi-attacher                             0                   1571308a93146       csi-hostpath-attacher-0
	59dcea13b91a7       aa61ee9c70bc4       14 minutes ago      Running             volume-snapshot-controller               0                   fc7a88bf2bbfa       snapshot-controller-7d9fbc56b8-pnqwn
	ac9ca79606b04       8c217da6734db       14 minutes ago      Exited              patch                                    0                   82e54969531ac       ingress-nginx-admission-patch-kvlpb
	fc2247488ceef       8c217da6734db       14 minutes ago      Exited              create                                   0                   249a7d7c465c4       ingress-nginx-admission-create-rn6ng
	ade8e5a3e89a5       38dca7434d5f2       14 minutes ago      Running             gadget                                   0                   cd47cb2e122c6       gadget-lrthv
	427e1841635f7       e16d1e3a10667       14 minutes ago      Running             local-path-provisioner                   0                   b07165834017e       local-path-provisioner-648f6765c9-txczb
	55e4c7d9441ba       b1c9f9ef5f0c2       14 minutes ago      Running             registry-proxy                           0                   dbfd8a2965678       registry-proxy-qdl2b
	11373ec0dad23       b6ab53fbfedaa       14 minutes ago      Running             minikube-ingress-dns                     0                   25d666aa48ee6       kube-ingress-dns-minikube
	61d2e3b41e535       6e38f40d628db       15 minutes ago      Running             storage-provisioner                      0                   c3fcdfcb3c777       storage-provisioner
	e93bcf6b41d34       d5e667c0f2bb6       15 minutes ago      Running             amd-gpu-device-plugin                    0                   dd63ea4bfdd23       amd-gpu-device-plugin-k6tpl
	836109d2ab5d3       52546a367cc9e       15 minutes ago      Running             coredns                                  0                   475cb9ba95a73       coredns-66bc5c9577-h4thg
	0daa3279505d6       fc25172553d79       15 minutes ago      Running             kube-proxy                               0                   85474e9f38355       kube-proxy-m9kg9
	05cee8f966b49       c80c8dbafe7dd       15 minutes ago      Running             kube-controller-manager                  0                   03c96ff8163c4       kube-controller-manager-addons-214022
	b4ca1f4c451a7       5f1f5298c888d       15 minutes ago      Running             etcd                                     0                   f69d756c4a41d       etcd-addons-214022
	84834930aaa27       7dd6aaa1717ab       15 minutes ago      Running             kube-scheduler                           0                   246bc566c0147       kube-scheduler-addons-214022
	da79537fc9aee       c3994bc696102       15 minutes ago      Running             kube-apiserver                           0                   6b21f01e5cdd5       kube-apiserver-addons-214022
	
	
	==> containerd <==
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.747252591Z" level=info msg="StopPodSandbox for \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\""
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.820983750Z" level=info msg="shim disconnected" id=f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1 namespace=k8s.io
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.821035060Z" level=warning msg="cleaning up after shim disconnected" id=f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1 namespace=k8s.io
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.821047759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.922743513Z" level=info msg="TearDown network for sandbox \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" successfully"
	Oct 13 14:10:54 addons-214022 containerd[816]: time="2025-10-13T14:10:54.922796813Z" level=info msg="StopPodSandbox for \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" returns successfully"
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.537492672Z" level=info msg="StopPodSandbox for \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\""
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.578629240Z" level=info msg="TearDown network for sandbox \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" successfully"
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.578687141Z" level=info msg="StopPodSandbox for \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" returns successfully"
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.579455770Z" level=info msg="RemovePodSandbox for \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\""
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.579513093Z" level=info msg="Forcibly stopping sandbox \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\""
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.609845850Z" level=info msg="TearDown network for sandbox \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" successfully"
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.616853327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 13 14:10:58 addons-214022 containerd[816]: time="2025-10-13T14:10:58.616956071Z" level=info msg="RemovePodSandbox \"f0e1de14957439f1d8e57193b2524dfdcda370e7181bae190f04180861632cf1\" returns successfully"
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.400002610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7,Uid:27f0937d-8365-4a09-a5e0-483da82734c6,Namespace:local-path-storage,Attempt:0,}"
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.544319188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.544789653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.544806989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.544978407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.616859629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7,Uid:27f0937d-8365-4a09-a5e0-483da82734c6,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"8d4727e441ea0d9a8cd66fe98cd1fb15acaedffb6b2f9451261d256f79922433\""
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.620866081Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.626002671Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.686144724Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.784451480Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:11:10 addons-214022 containerd[816]: time="2025-10-13T14:11:10.784581442Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	
	
	==> coredns [836109d2ab5d3098ccc6f029d103e56da702d50a57e73f14a97ae3b019a5fa1c] <==
	[INFO] 10.244.0.8:44754 - 59184 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000112917s
	[INFO] 10.244.0.8:57854 - 36817 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000203845s
	[INFO] 10.244.0.8:57854 - 16208 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00028754s
	[INFO] 10.244.0.8:57854 - 20112 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000129275s
	[INFO] 10.244.0.8:57854 - 44652 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084328s
	[INFO] 10.244.0.8:57854 - 12391 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082124s
	[INFO] 10.244.0.8:57854 - 10202 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000246848s
	[INFO] 10.244.0.8:57854 - 56357 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000083247s
	[INFO] 10.244.0.8:57854 - 23505 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000455411s
	[INFO] 10.244.0.8:54470 - 4565 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000205148s
	[INFO] 10.244.0.8:54470 - 11280 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000276765s
	[INFO] 10.244.0.8:54470 - 21382 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000138493s
	[INFO] 10.244.0.8:54470 - 63399 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000197533s
	[INFO] 10.244.0.8:54470 - 56241 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000075915s
	[INFO] 10.244.0.8:54470 - 28366 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000095953s
	[INFO] 10.244.0.8:54470 - 56204 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000133282s
	[INFO] 10.244.0.8:54470 - 12926 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000393581s
	[INFO] 10.244.0.8:46631 - 63313 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0001805s
	[INFO] 10.244.0.8:46631 - 46707 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000752466s
	[INFO] 10.244.0.8:46631 - 4566 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000410638s
	[INFO] 10.244.0.8:46631 - 42973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000322734s
	[INFO] 10.244.0.8:46631 - 63347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084562s
	[INFO] 10.244.0.8:46631 - 48986 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000320915s
	[INFO] 10.244.0.8:46631 - 20743 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000265944s
	[INFO] 10.244.0.8:46631 - 27369 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000304078s
	
	
	==> describe nodes <==
	Name:               addons-214022
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214022
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-214022
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T13_55_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214022
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-214022"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 13:55:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214022
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:09:35 +0000   Mon, 13 Oct 2025 13:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-214022
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 c368161c275346d2a9ea3f8a7f4ac862
	  System UUID:                c368161c-2753-46d2-a9ea-3f8a7f4ac862
	  Boot ID:                    687454d4-3e74-47c7-85c1-524150a13269
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  gadget                      gadget-lrthv                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7jf8g                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         15m
	  kube-system                 amd-gpu-device-plugin-k6tpl                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-h4thg                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-4jxqs                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-214022                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-214022                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-214022                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m9kg9                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-214022                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-66898fdd98-qpt8q                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-creds-764b6fb674-rsjlm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-proxy-qdl2b                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-7d9fbc56b8-fcqg8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-7d9fbc56b8-pnqwn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-txczb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bl6xb                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-214022 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-214022 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-214022 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-214022 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-214022 event: Registered Node addons-214022 in Controller
	
	
	==> dmesg <==
	[  +0.112005] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.097255] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.134471] kauditd_printk_skb: 171 callbacks suppressed
	[Oct13 13:56] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.000102] kauditd_printk_skb: 285 callbacks suppressed
	[  +1.171734] kauditd_printk_skb: 342 callbacks suppressed
	[  +0.188548] kauditd_printk_skb: 340 callbacks suppressed
	[ +10.023317] kauditd_printk_skb: 173 callbacks suppressed
	[ +11.926739] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.270838] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.901459] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 13:57] kauditd_printk_skb: 117 callbacks suppressed
	[  +1.255372] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.136427] kauditd_printk_skb: 50 callbacks suppressed
	[  +4.193430] kauditd_printk_skb: 68 callbacks suppressed
	[Oct13 14:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000059] kauditd_printk_skb: 65 callbacks suppressed
	[ +12.058507] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000136] kauditd_printk_skb: 22 callbacks suppressed
	[Oct13 14:09] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.303382] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.474208] kauditd_printk_skb: 49 callbacks suppressed
	[Oct13 14:10] kauditd_printk_skb: 90 callbacks suppressed
	[Oct13 14:11] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [b4ca1f4c451a74c7ea64ca0e34512e160fbd260fd3969afb6e67fca08f49102b] <==
	{"level":"info","ts":"2025-10-13T13:57:03.066329Z","caller":"traceutil/trace.go:172","msg":"trace[1337303940] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"235.769671ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066321Z","steps":["trace[1337303940] 'range keys from in-memory index tree'  (duration: 235.56325ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T13:57:03.066781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.221636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:03.066824Z","caller":"traceutil/trace.go:172","msg":"trace[1790166720] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1254; }","duration":"236.26612ms","start":"2025-10-13T13:57:02.830551Z","end":"2025-10-13T13:57:03.066818Z","steps":["trace[1790166720] 'range keys from in-memory index tree'  (duration: 236.097045ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315015Z","caller":"traceutil/trace.go:172","msg":"trace[940649486] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"127.017691ms","start":"2025-10-13T13:57:23.187982Z","end":"2025-10-13T13:57:23.314999Z","steps":["trace[940649486] 'read index received'  (duration: 127.006943ms)","trace[940649486] 'applied index is now lower than readState.Index'  (duration: 4.937µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T13:57:23.315177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.178772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T13:57:23.315206Z","caller":"traceutil/trace.go:172","msg":"trace[2128069664] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:1356; }","duration":"127.222714ms","start":"2025-10-13T13:57:23.187978Z","end":"2025-10-13T13:57:23.315201Z","steps":["trace[2128069664] 'agreement among raft nodes before linearized reading'  (duration: 127.149155ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T13:57:23.315263Z","caller":"traceutil/trace.go:172","msg":"trace[1733438696] transaction","detail":"{read_only:false; response_revision:1357; number_of_response:1; }","duration":"135.233261ms","start":"2025-10-13T13:57:23.180019Z","end":"2025-10-13T13:57:23.315253Z","steps":["trace[1733438696] 'process raft request'  (duration: 135.141996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:05:52.467650Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1907}
	{"level":"info","ts":"2025-10-13T14:05:52.575208Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1907,"took":"105.568434ms","hash":1304879421,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4886528,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-13T14:05:52.575710Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1304879421,"revision":1907,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T14:09:13.842270Z","caller":"traceutil/trace.go:172","msg":"trace[1885689359] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3177; }","duration":"274.560471ms","start":"2025-10-13T14:09:13.567649Z","end":"2025-10-13T14:09:13.842209Z","steps":["trace[1885689359] 'read index received'  (duration: 274.551109ms)","trace[1885689359] 'applied index is now lower than readState.Index'  (duration: 8.253µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.906716Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.580668ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.906823Z","caller":"traceutil/trace.go:172","msg":"trace[1704629397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2982; }","duration":"187.730839ms","start":"2025-10-13T14:09:13.719077Z","end":"2025-10-13T14:09:13.906808Z","steps":["trace[1704629397] 'range keys from in-memory index tree'  (duration: 187.538324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.906973Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"339.314013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2025-10-13T14:09:13.907424Z","caller":"traceutil/trace.go:172","msg":"trace[692800306] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"346.864291ms","start":"2025-10-13T14:09:13.560497Z","end":"2025-10-13T14:09:13.907361Z","steps":["trace[692800306] 'process raft request'  (duration: 281.825137ms)","trace[692800306] 'compare'  (duration: 64.828079ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T14:09:13.907508Z","caller":"traceutil/trace.go:172","msg":"trace[107743050] range","detail":"{range_begin:/registry/ipaddresses/10.101.151.157; range_end:; response_count:1; response_revision:2982; }","duration":"339.484538ms","start":"2025-10-13T14:09:13.567635Z","end":"2025-10-13T14:09:13.907120Z","steps":["trace[107743050] 'agreement among raft nodes before linearized reading'  (duration: 274.852745ms)","trace[107743050] 'range keys from in-memory index tree'  (duration: 64.106294ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T14:09:13.907801Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.567617Z","time spent":"339.918526ms","remote":"127.0.0.1:33944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":627,"request content":"key:\"/registry/ipaddresses/10.101.151.157\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T14:09:13.908101Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560488Z","time spent":"346.985335ms","remote":"127.0.0.1:33882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":61,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" mod_revision:2971 > success:<request_delete_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > > failure:<request_range:<key:\"/registry/endpointslices/kube-system/metrics-server-hlhls\" > >"}
	{"level":"info","ts":"2025-10-13T14:09:13.908220Z","caller":"traceutil/trace.go:172","msg":"trace[2073246272] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2983; }","duration":"347.573522ms","start":"2025-10-13T14:09:13.560640Z","end":"2025-10-13T14:09:13.908213Z","steps":["trace[2073246272] 'process raft request'  (duration: 346.576205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T14:09:13.908282Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T14:09:13.560629Z","time spent":"347.615581ms","remote":"127.0.0.1:33684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":59,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:2972 > success:<request_delete_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"warn","ts":"2025-10-13T14:09:13.910053Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.064409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T14:09:13.910727Z","caller":"traceutil/trace.go:172","msg":"trace[1060924441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2983; }","duration":"217.741397ms","start":"2025-10-13T14:09:13.692976Z","end":"2025-10-13T14:09:13.910718Z","steps":["trace[1060924441] 'agreement among raft nodes before linearized reading'  (duration: 216.722483ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T14:10:52.476707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2368}
	{"level":"info","ts":"2025-10-13T14:10:52.510907Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2368,"took":"32.98551ms","hash":1037835104,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5537792,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-13T14:10:52.510982Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1037835104,"revision":2368,"compact-revision":1907}
	
	
	==> kernel <==
	 14:11:24 up 16 min,  0 users,  load average: 0.91, 0.98, 0.75
	Linux addons-214022 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [da79537fc9aee4eda997318cc0aeef07f5a4e3bbd4aed4282ff9e486eecb0cd7] <==
	I1013 14:08:24.534569       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:24.913458       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.024102       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.588117       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.763275       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.806287       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1013 14:08:25.836075       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.910579       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1013 14:08:25.938831       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W1013 14:08:26.095661       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1013 14:08:26.314291       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.607638       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	I1013 14:08:26.637481       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:26.689652       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1013 14:08:26.941141       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1013 14:08:26.941574       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1013 14:08:26.961310       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I1013 14:08:27.080209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1013 14:08:27.138121       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1013 14:08:28.080963       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1013 14:08:28.086493       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1013 14:08:45.022422       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40132: use of closed network connection
	E1013 14:08:45.229592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:40168: use of closed network connection
	I1013 14:08:54.741628       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.41.148"}
	I1013 14:09:48.903970       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [05cee8f966b4938e3d1606d404d9401b9949f288ba68c08a76c3856610945ee7] <==
	E1013 14:10:19.470752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:30.543613       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:30.545148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:30.762327       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:30.763484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:32.725028       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:32.726281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:35.932555       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:35.933753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:39.797898       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:39.799294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:46.341675       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:46.343585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:10:46.357803       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:10:46.359763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:11:04.132089       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:11:04.133454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:11:05.426663       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:11:05.428272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:11:17.797512       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:11:17.799102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:11:20.222173       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:11:20.224317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 14:11:22.424612       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 14:11:22.427273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0daa3279505d674c83f3e6813f82b58744dbeede0c9d8a5f5e902c9d9cca7441] <==
	I1013 13:56:04.284946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 13:56:04.385972       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 13:56:04.386554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1013 13:56:04.387583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 13:56:04.791284       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 13:56:04.792086       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 13:56:04.792127       1 server_linux.go:132] "Using iptables Proxier"
	I1013 13:56:04.830526       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 13:56:04.832819       1 server.go:527] "Version info" version="v1.34.1"
	I1013 13:56:04.832853       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 13:56:04.853725       1 config.go:200] "Starting service config controller"
	I1013 13:56:04.853757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 13:56:04.853901       1 config.go:106] "Starting endpoint slice config controller"
	I1013 13:56:04.853927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 13:56:04.854547       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 13:56:04.854575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 13:56:04.862975       1 config.go:309] "Starting node config controller"
	I1013 13:56:04.863007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 13:56:04.863015       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 13:56:04.956286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 13:56:04.956330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 13:56:04.957110       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84834930aaa277a8e849b685332e6fb4b453bbc88da065fb1d682e6c39de1c89] <==
	E1013 13:55:54.569998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:54.570036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:54.570113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:54.570148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:54.570176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 13:55:54.570210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:54.570246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 13:55:54.569635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:54.571687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.412211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 13:55:55.434014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 13:55:55.466581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 13:55:55.489914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 13:55:55.548770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 13:55:55.605071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 13:55:55.677154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 13:55:55.682700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 13:55:55.710259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 13:55:55.717675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 13:55:55.763499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 13:55:55.780817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 13:55:55.877364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 13:55:55.895577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 13:55:55.926098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1013 13:55:58.161609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:10:55 addons-214022 kubelet[1511]: I1013 14:10:55.064874    1511 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbe7958-8690-444d-8cd7-c8b12f0ea5ff-kube-api-access-qtx7n" (OuterVolumeSpecName: "kube-api-access-qtx7n") pod "afbe7958-8690-444d-8cd7-c8b12f0ea5ff" (UID: "afbe7958-8690-444d-8cd7-c8b12f0ea5ff"). InnerVolumeSpecName "kube-api-access-qtx7n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 13 14:10:55 addons-214022 kubelet[1511]: I1013 14:10:55.162038    1511 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/afbe7958-8690-444d-8cd7-c8b12f0ea5ff-script\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:10:55 addons-214022 kubelet[1511]: I1013 14:10:55.162098    1511 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qtx7n\" (UniqueName: \"kubernetes.io/projected/afbe7958-8690-444d-8cd7-c8b12f0ea5ff-kube-api-access-qtx7n\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:10:55 addons-214022 kubelet[1511]: I1013 14:10:55.162109    1511 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/afbe7958-8690-444d-8cd7-c8b12f0ea5ff-data\") on node \"addons-214022\" DevicePath \"\""
	Oct 13 14:10:55 addons-214022 kubelet[1511]: I1013 14:10:55.375807    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:10:55 addons-214022 kubelet[1511]: E1013 14:10:55.377659    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:10:57 addons-214022 kubelet[1511]: I1013 14:10:57.379140    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbe7958-8690-444d-8cd7-c8b12f0ea5ff" path="/var/lib/kubelet/pods/afbe7958-8690-444d-8cd7-c8b12f0ea5ff/volumes"
	Oct 13 14:11:03 addons-214022 kubelet[1511]: E1013 14:11:03.377751    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb" podUID="9b696edf-33b0-4b8c-a0c6-b17b9bb067fa"
	Oct 13 14:11:04 addons-214022 kubelet[1511]: I1013 14:11:04.376053    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:11:07 addons-214022 kubelet[1511]: I1013 14:11:07.376670    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:11:07 addons-214022 kubelet[1511]: E1013 14:11:07.378629    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	Oct 13 14:11:07 addons-214022 kubelet[1511]: E1013 14:11:07.376695    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:11:09 addons-214022 kubelet[1511]: I1013 14:11:09.990661    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/27f0937d-8365-4a09-a5e0-483da82734c6-script\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"27f0937d-8365-4a09-a5e0-483da82734c6\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:11:09 addons-214022 kubelet[1511]: I1013 14:11:09.990765    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/27f0937d-8365-4a09-a5e0-483da82734c6-data\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"27f0937d-8365-4a09-a5e0-483da82734c6\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:11:09 addons-214022 kubelet[1511]: I1013 14:11:09.990798    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqs7\" (UniqueName: \"kubernetes.io/projected/27f0937d-8365-4a09-a5e0-483da82734c6-kube-api-access-brqs7\") pod \"helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7\" (UID: \"27f0937d-8365-4a09-a5e0-483da82734c6\") " pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7"
	Oct 13 14:11:10 addons-214022 kubelet[1511]: E1013 14:11:10.784910    1511 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:11:10 addons-214022 kubelet[1511]: E1013 14:11:10.784966    1511 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 13 14:11:10 addons-214022 kubelet[1511]: E1013 14:11:10.785244    1511 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7_local-path-storage(27f0937d-8365-4a09-a5e0-483da82734c6): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:11:10 addons-214022 kubelet[1511]: E1013 14:11:10.785357    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" podUID="27f0937d-8365-4a09-a5e0-483da82734c6"
	Oct 13 14:11:11 addons-214022 kubelet[1511]: I1013 14:11:11.375827    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qdl2b" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:11:11 addons-214022 kubelet[1511]: E1013 14:11:11.577654    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" podUID="27f0937d-8365-4a09-a5e0-483da82734c6"
	Oct 13 14:11:15 addons-214022 kubelet[1511]: E1013 14:11:15.377292    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/marcnuri/yakd/manifests/sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-bl6xb" podUID="9b696edf-33b0-4b8c-a0c6-b17b9bb067fa"
	Oct 13 14:11:20 addons-214022 kubelet[1511]: E1013 14:11:20.375735    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="bda8657d-2e14-4dc2-9e93-ecb85c37f5ed"
	Oct 13 14:11:21 addons-214022 kubelet[1511]: I1013 14:11:21.375245    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-qpt8q" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 14:11:21 addons-214022 kubelet[1511]: E1013 14:11:21.376955    1511 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/registry/manifests/sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-qpt8q" podUID="4a93c83e-03fe-4e05-909f-bd2339c6559f"
	
	
	==> storage-provisioner [61d2e3b41e535c2d6e45412739c6b7e475d5a6aef5eb620041ffb9e4f7f53d5d] <==
	W1013 14:10:58.845682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:00.851110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:00.858006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:02.863596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:02.869893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:04.874653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:04.880003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:06.883943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:06.889847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:08.894006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:08.900269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:10.904355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:10.910714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:12.915225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:12.920879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:14.924192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:14.933085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:16.937565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:16.943578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:18.946609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:18.954354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:20.958847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:20.964998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:22.970104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:11:22.981666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214022 -n addons-214022
helpers_test.go:269: (dbg) Run:  kubectl --context addons-214022 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7 yakd-dashboard-5ff678cb9-bl6xb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-214022 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7 yakd-dashboard-5ff678cb9-bl6xb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-214022 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7 yakd-dashboard-5ff678cb9-bl6xb: exit status 1 (105.204952ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214022/192.168.39.214
	Start Time:       Mon, 13 Oct 2025 14:09:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cpq8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cpq8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m10s                default-scheduler  Successfully assigned default/task-pv-pod to addons-214022
	  Normal   Pulling    44s (x4 over 2m10s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     44s (x4 over 2m9s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     44s (x4 over 2m9s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x8 over 2m9s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5s (x8 over 2m9s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wxvk (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8wxvk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rn6ng" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvlpb" not found
	Error from server (NotFound): pods "registry-66898fdd98-qpt8q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rsjlm" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-bl6xb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-214022 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-rn6ng ingress-nginx-admission-patch-kvlpb registry-66898fdd98-qpt8q registry-creds-764b6fb674-rsjlm helper-pod-create-pvc-55a728ff-90af-4dc3-86a6-89940ab549a7 yakd-dashboard-5ff678cb9-bl6xb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable yakd --alsologtostderr -v=1: (5.842479824s)
--- FAIL: TestAddons/parallel/Yakd (128.82s)
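Several of the pull failures above share one root cause: unauthenticated pulls from docker.io hitting Docker Hub's 429 rate limit. When triaging a report like this, it can help to classify kubelet `Failed` events mechanically; the sketch below is illustrative (the helper name and sample text are made up for the example, but the signature strings come from the event log above):

```python
import re

# Docker Hub returns HTTP 429 with a "toomanyrequests" body when the
# unauthenticated pull limit is exceeded; both markers show up in kubelet events.
RATE_LIMIT_RE = re.compile(r"429 Too Many Requests|toomanyrequests")

def is_rate_limited(event_message: str) -> bool:
    """Heuristically detect a Docker Hub pull-rate-limit failure in an event message."""
    return bool(RATE_LIMIT_RE.search(event_message))

sample = ('Failed to pull image "docker.io/nginx": failed to pull and unpack image '
          '"docker.io/library/nginx:latest": 429 Too Many Requests - Server message: '
          'toomanyrequests: You have reached your unauthenticated pull rate limit.')
print(is_rate_limited(sample))                 # True
print(is_rate_limited("Error: ErrImagePull"))  # False
```

Preloading the image into the cluster ahead of the test (for example with `minikube image load` from an authenticated host) or pointing containerd at a registry mirror would sidestep the limit entirely.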

TestFunctional/parallel/DashboardCmd (302.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-608191 --alsologtostderr -v=1]
E1013 14:32:20.513890 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-608191 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-608191 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-608191 --alsologtostderr -v=1] stderr:
I1013 14:31:19.428418 1831970 out.go:360] Setting OutFile to fd 1 ...
I1013 14:31:19.428681 1831970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:31:19.428689 1831970 out.go:374] Setting ErrFile to fd 2...
I1013 14:31:19.428693 1831970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:31:19.428925 1831970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:31:19.429232 1831970 mustload.go:65] Loading cluster: functional-608191
I1013 14:31:19.429576 1831970 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:31:19.429972 1831970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:31:19.430039 1831970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:31:19.444752 1831970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
I1013 14:31:19.445284 1831970 main.go:141] libmachine: () Calling .GetVersion
I1013 14:31:19.445897 1831970 main.go:141] libmachine: Using API Version  1
I1013 14:31:19.445932 1831970 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:31:19.446339 1831970 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:31:19.446559 1831970 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:31:19.448377 1831970 host.go:66] Checking if "functional-608191" exists ...
I1013 14:31:19.448821 1831970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:31:19.448902 1831970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:31:19.463237 1831970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
I1013 14:31:19.463686 1831970 main.go:141] libmachine: () Calling .GetVersion
I1013 14:31:19.464270 1831970 main.go:141] libmachine: Using API Version  1
I1013 14:31:19.464292 1831970 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:31:19.464737 1831970 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:31:19.464980 1831970 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:31:19.465142 1831970 api_server.go:166] Checking apiserver status ...
I1013 14:31:19.465193 1831970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1013 14:31:19.465225 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:31:19.468355 1831970 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:31:19.468775 1831970 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:31:19.468809 1831970 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:31:19.468967 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:31:19.469157 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:31:19.469298 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:31:19.469433 1831970 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:31:19.565384 1831970 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5795/cgroup
W1013 14:31:19.579352 1831970 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5795/cgroup: Process exited with status 1
stdout:

stderr:
I1013 14:31:19.579427 1831970 ssh_runner.go:195] Run: ls
I1013 14:31:19.585024 1831970 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8441/healthz ...
I1013 14:31:19.590882 1831970 api_server.go:279] https://192.168.39.10:8441/healthz returned 200:
ok
W1013 14:31:19.590931 1831970 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1013 14:31:19.591099 1831970 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:31:19.591114 1831970 addons.go:69] Setting dashboard=true in profile "functional-608191"
I1013 14:31:19.591121 1831970 addons.go:238] Setting addon dashboard=true in "functional-608191"
I1013 14:31:19.591148 1831970 host.go:66] Checking if "functional-608191" exists ...
I1013 14:31:19.591417 1831970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:31:19.591455 1831970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:31:19.605442 1831970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
I1013 14:31:19.606160 1831970 main.go:141] libmachine: () Calling .GetVersion
I1013 14:31:19.606702 1831970 main.go:141] libmachine: Using API Version  1
I1013 14:31:19.606739 1831970 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:31:19.607127 1831970 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:31:19.607625 1831970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:31:19.607692 1831970 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:31:19.621816 1831970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
I1013 14:31:19.622314 1831970 main.go:141] libmachine: () Calling .GetVersion
I1013 14:31:19.622798 1831970 main.go:141] libmachine: Using API Version  1
I1013 14:31:19.622824 1831970 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:31:19.623355 1831970 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:31:19.623607 1831970 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:31:19.625572 1831970 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:31:19.628031 1831970 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1013 14:31:19.629514 1831970 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1013 14:31:19.630847 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1013 14:31:19.630870 1831970 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1013 14:31:19.630926 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:31:19.634407 1831970 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:31:19.634837 1831970 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:31:19.634885 1831970 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:31:19.635055 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:31:19.635233 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:31:19.635457 1831970 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:31:19.635619 1831970 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:31:19.738391 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1013 14:31:19.738419 1831970 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1013 14:31:19.761434 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1013 14:31:19.761488 1831970 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1013 14:31:19.784479 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1013 14:31:19.784514 1831970 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1013 14:31:19.809430 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1013 14:31:19.809455 1831970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1013 14:31:19.834289 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1013 14:31:19.834330 1831970 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1013 14:31:19.858324 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1013 14:31:19.858359 1831970 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1013 14:31:19.885210 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1013 14:31:19.885240 1831970 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1013 14:31:19.911897 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1013 14:31:19.911927 1831970 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1013 14:31:19.936844 1831970 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1013 14:31:19.936886 1831970 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1013 14:31:19.960603 1831970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1013 14:31:20.710363 1831970 main.go:141] libmachine: Making call to close driver server
I1013 14:31:20.710406 1831970 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:31:20.710700 1831970 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:31:20.710703 1831970 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:31:20.710729 1831970 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:31:20.710740 1831970 main.go:141] libmachine: Making call to close driver server
I1013 14:31:20.710748 1831970 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:31:20.711024 1831970 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:31:20.711044 1831970 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:31:20.711053 1831970 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:31:20.712889 1831970 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-608191 addons enable metrics-server

I1013 14:31:20.714317 1831970 addons.go:201] Writing out "functional-608191" config to set dashboard=true...
W1013 14:31:20.714581 1831970 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1013 14:31:20.715259 1831970 kapi.go:59] client config for functional-608191: &rest.Config{Host:"https://192.168.39.10:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.key", CAFile:"/home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1013 14:31:20.715750 1831970 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1013 14:31:20.715769 1831970 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1013 14:31:20.715774 1831970 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1013 14:31:20.715779 1831970 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1013 14:31:20.715782 1831970 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1013 14:31:20.726940 1831970 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  fbe6f884-b3bb-4b96-b661-dfac79173207 1428 0 2025-10-13 14:31:20 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-13 14:31:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.141.103,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.141.103],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1013 14:31:20.727182 1831970 out.go:285] * Launching proxy ...
* Launching proxy ...
I1013 14:31:20.727259 1831970 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-608191 proxy --port 36195]
I1013 14:31:20.727608 1831970 dashboard.go:157] Waiting for kubectl to output host:port ...
I1013 14:31:20.777486 1831970 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1013 14:31:20.777526 1831970 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1013 14:31:20.786831 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb6a2d1a-2f46-451a-9b8c-55f28b72d3e5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc00081b1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1013 14:31:20.786923 1831970 retry.go:31] will retry after 110.537µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.794706 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c24cafc8-b005-49be-ab75-ed933cf18807] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000880f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I1013 14:31:20.794808 1831970 retry.go:31] will retry after 95.296µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.799970 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c3916e0-b304-4928-8eee-e8a85235c053] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000b09f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I1013 14:31:20.800059 1831970 retry.go:31] will retry after 294.937µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.805669 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47196232-f0d5-4868-97fd-c189eb2f955b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc0016c80c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1013 14:31:20.805752 1831970 retry.go:31] will retry after 232.908µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.809687 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4b32969d-5cde-45de-a2c9-c5518522c141] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1013 14:31:20.809797 1831970 retry.go:31] will retry after 572.243µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.813485 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0934a5f-acd2-4c4c-aab9-043fcedf11d9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I1013 14:31:20.813541 1831970 retry.go:31] will retry after 426.571µs: Temporary Error: unexpected response code: 503
I1013 14:31:20.816990 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[719839da-58cc-43e7-b24e-7375a9f7aea7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c000 TLS:<nil>}
I1013 14:31:20.817057 1831970 retry.go:31] will retry after 1.233497ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.822316 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5aff3b2e-9e46-4cdb-9e8e-52f061067d53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc0016c81c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c140 TLS:<nil>}
I1013 14:31:20.822364 1831970 retry.go:31] will retry after 1.355735ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.828384 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c386f3e-6810-4bac-b750-e0842ebdf22f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc00081b300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1013 14:31:20.828442 1831970 retry.go:31] will retry after 1.39181ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.833579 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd9e323f-74e8-4cfe-836c-60274a02b535] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I1013 14:31:20.833630 1831970 retry.go:31] will retry after 4.289686ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.841890 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6951f78-6649-42eb-b05c-eaac28159672] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc00081b440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c280 TLS:<nil>}
I1013 14:31:20.841956 1831970 retry.go:31] will retry after 3.384373ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.849191 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc346bd8-ac62-4d13-9d0b-e9dba6b3d8c2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c000 TLS:<nil>}
I1013 14:31:20.849247 1831970 retry.go:31] will retry after 12.389316ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.866511 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a9d478a-897e-4302-8a50-3acf08894d46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c3c0 TLS:<nil>}
I1013 14:31:20.866593 1831970 retry.go:31] will retry after 11.423776ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.882369 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f73713f-6583-4d16-949e-9f0c67a9cd51] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc0016c82c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c500 TLS:<nil>}
I1013 14:31:20.882460 1831970 retry.go:31] will retry after 25.711265ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.919279 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20d12915-e576-405a-a03f-e9274b48ec61] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1013 14:31:20.919372 1831970 retry.go:31] will retry after 31.335256ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.954790 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f8527f0-8340-4541-8969-739cad49ce8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc00081b580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c640 TLS:<nil>}
I1013 14:31:20.954867 1831970 retry.go:31] will retry after 32.518349ms: Temporary Error: unexpected response code: 503
I1013 14:31:20.995019 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c717fde9-b9a9-49ab-abcf-fd3d51b27cc1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:20 GMT]] Body:0xc000881940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c140 TLS:<nil>}
I1013 14:31:20.995117 1831970 retry.go:31] will retry after 48.298835ms: Temporary Error: unexpected response code: 503
I1013 14:31:21.049063 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbe05e0e-ac8d-447c-bdaa-95fe5ef50fce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:21 GMT]] Body:0xc00081b6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c780 TLS:<nil>}
I1013 14:31:21.049145 1831970 retry.go:31] will retry after 132.975827ms: Temporary Error: unexpected response code: 503
I1013 14:31:21.186963 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2c2e72d-f024-490c-a912-453cc3e68136] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:21 GMT]] Body:0xc0016c83c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c280 TLS:<nil>}
I1013 14:31:21.187029 1831970 retry.go:31] will retry after 125.294198ms: Temporary Error: unexpected response code: 503
I1013 14:31:21.317021 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f8f0faf-6841-421d-bbec-034ce4bb5703] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:21 GMT]] Body:0xc000881b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I1013 14:31:21.317092 1831970 retry.go:31] will retry after 310.660371ms: Temporary Error: unexpected response code: 503
I1013 14:31:21.636512 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89e37b1c-e28b-4b27-a66a-77ff141428f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:21 GMT]] Body:0xc000881c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175c8c0 TLS:<nil>}
I1013 14:31:21.636602 1831970 retry.go:31] will retry after 364.834629ms: Temporary Error: unexpected response code: 503
I1013 14:31:22.006294 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[065b5c8f-0f52-4a08-b964-cba9b2a05294] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:22 GMT]] Body:0xc00081b840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175ca00 TLS:<nil>}
I1013 14:31:22.006376 1831970 retry.go:31] will retry after 292.360961ms: Temporary Error: unexpected response code: 503
I1013 14:31:22.308051 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5844e5a-57ce-4a02-84de-c09ab5a3786e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:22 GMT]] Body:0xc000881e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c3c0 TLS:<nil>}
I1013 14:31:22.308134 1831970 retry.go:31] will retry after 1.080264342s: Temporary Error: unexpected response code: 503
I1013 14:31:23.393787 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6a98f81a-2a2c-4976-89cb-01dc5842c3fe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:23 GMT]] Body:0xc00081b900 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175cb40 TLS:<nil>}
I1013 14:31:23.393890 1831970 retry.go:31] will retry after 1.204047703s: Temporary Error: unexpected response code: 503
I1013 14:31:24.602030 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e84e9f4c-3314-4708-bddd-ff3ed6f301fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:24 GMT]] Body:0xc0016c8540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c500 TLS:<nil>}
I1013 14:31:24.602140 1831970 retry.go:31] will retry after 965.174149ms: Temporary Error: unexpected response code: 503
I1013 14:31:25.571815 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2b6cbd7-5db9-43e8-aac6-0927e192dd1a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:25 GMT]] Body:0xc00081ba40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1013 14:31:25.571894 1831970 retry.go:31] will retry after 2.444798885s: Temporary Error: unexpected response code: 503
I1013 14:31:28.023214 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c411b2c2-b414-426a-ba09-0b4a4cce1ade] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:28 GMT]] Body:0xc0016c8640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c640 TLS:<nil>}
I1013 14:31:28.023292 1831970 retry.go:31] will retry after 4.802956958s: Temporary Error: unexpected response code: 503
I1013 14:31:32.831609 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a24a81c-e172-4104-9a29-477f0af86c9e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:32 GMT]] Body:0xc0018a8040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c780 TLS:<nil>}
I1013 14:31:32.831687 1831970 retry.go:31] will retry after 5.753665667s: Temporary Error: unexpected response code: 503
I1013 14:31:38.592909 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d82ba772-3ad2-4c54-9425-2bd220b04dc4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:38 GMT]] Body:0xc00081bc00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175cc80 TLS:<nil>}
I1013 14:31:38.593001 1831970 retry.go:31] will retry after 6.268264289s: Temporary Error: unexpected response code: 503
I1013 14:31:44.865812 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61517cc1-d36d-4251-97b0-5d7134ff940a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:44 GMT]] Body:0xc0018a8100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178c8c0 TLS:<nil>}
I1013 14:31:44.865918 1831970 retry.go:31] will retry after 11.258236309s: Temporary Error: unexpected response code: 503
I1013 14:31:56.131555 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49b7067a-1c44-4e86-83b1-605edb65a9ec] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:31:56 GMT]] Body:0xc0016c86c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178ca00 TLS:<nil>}
I1013 14:31:56.131628 1831970 retry.go:31] will retry after 14.978504166s: Temporary Error: unexpected response code: 503
I1013 14:32:11.116132 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9f58b31b-14a6-4d01-9dea-29138b41fc05] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:32:11 GMT]] Body:0xc0018a8180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00178cb40 TLS:<nil>}
I1013 14:32:11.116237 1831970 retry.go:31] will retry after 22.577238555s: Temporary Error: unexpected response code: 503
I1013 14:32:33.698906 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15217429-8abc-4ce0-b30a-4bc610d7d870] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:32:33 GMT]] Body:0xc0016c8780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175cdc0 TLS:<nil>}
I1013 14:32:33.698988 1831970 retry.go:31] will retry after 39.699172461s: Temporary Error: unexpected response code: 503
I1013 14:33:13.403603 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8cd74bd0-58b2-4b1b-be44-c5f6a9e52e26] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:33:13 GMT]] Body:0xc0017a4000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00175cf00 TLS:<nil>}
I1013 14:33:13.403674 1831970 retry.go:31] will retry after 51.111364579s: Temporary Error: unexpected response code: 503
I1013 14:34:04.519878 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a61621b-0e1f-44e3-affe-259173b3a8fa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:34:04 GMT]] Body:0xc000250dc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I1013 14:34:04.519969 1831970 retry.go:31] will retry after 42.553294198s: Temporary Error: unexpected response code: 503
I1013 14:34:47.077403 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b4373c7-61b0-445a-b741-b67c32c16b1b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:34:47 GMT]] Body:0xc000394700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00069c3c0 TLS:<nil>}
I1013 14:34:47.077501 1831970 retry.go:31] will retry after 40.53986941s: Temporary Error: unexpected response code: 503
I1013 14:35:27.628421 1831970 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5352453-0b9c-4253-8b1f-82d94c39e4b2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 14:35:27 GMT]] Body:0xc0016c80c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00069c8c0 TLS:<nil>}
I1013 14:35:27.628490 1831970 retry.go:31] will retry after 1m21.875697729s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-608191 -n functional-608191
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs -n 25: (1.624498977s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                ARGS                                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-608191 image save kicbase/echo-server:functional-608191 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image          │ functional-608191 image rm kicbase/echo-server:functional-608191 --alsologtostderr                                                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image          │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image          │ functional-608191 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image          │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image          │ functional-608191 image save --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ start          │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start          │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start          │ -p functional-608191 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                    │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-608191 --alsologtostderr -v=1                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ update-context │ functional-608191 update-context --alsologtostderr -v=2                                                                                                            │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ update-context │ functional-608191 update-context --alsologtostderr -v=2                                                                                                            │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ update-context │ functional-608191 update-context --alsologtostderr -v=2                                                                                                            │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ image          │ functional-608191 image ls --format short --alsologtostderr                                                                                                        │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ image          │ functional-608191 image ls --format yaml --alsologtostderr                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ ssh            │ functional-608191 ssh pgrep buildkitd                                                                                                                              │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │                     │
	│ image          │ functional-608191 image build -t localhost/my-image:functional-608191 testdata/build --alsologtostderr                                                             │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ image          │ functional-608191 image ls --format json --alsologtostderr                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ image          │ functional-608191 image ls --format table --alsologtostderr                                                                                                        │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ image          │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ service        │ functional-608191 service list                                                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ service        │ functional-608191 service list -o json                                                                                                                             │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │ 13 Oct 25 14:35 UTC │
	│ service        │ functional-608191 service --namespace=default --https --url hello-node                                                                                             │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │                     │
	│ service        │ functional-608191 service hello-node --url --format={{.IP}}                                                                                                        │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │                     │
	│ service        │ functional-608191 service hello-node --url                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:35 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 14:31:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 14:31:19.291613 1831942 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:31:19.291999 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292017 1831942 out.go:374] Setting ErrFile to fd 2...
	I1013 14:31:19.292025 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292396 1831942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:31:19.293045 1831942 out.go:368] Setting JSON to false
	I1013 14:31:19.294312 1831942 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22427,"bootTime":1760343452,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:31:19.294428 1831942 start.go:141] virtualization: kvm guest
	I1013 14:31:19.296444 1831942 out.go:179] * [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:31:19.297978 1831942 notify.go:220] Checking for updates...
	I1013 14:31:19.297983 1831942 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:31:19.299274 1831942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:31:19.300464 1831942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:31:19.301569 1831942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:31:19.302616 1831942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:31:19.303778 1831942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:31:19.305317 1831942 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:31:19.305931 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.305984 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.320114 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I1013 14:31:19.320672 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.321379 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.321408 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.321835 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.322029 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.322314 1831942 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:31:19.322636 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.322674 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.337144 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1013 14:31:19.337704 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.338258 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.338283 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.338647 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.338878 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.371631 1831942 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 14:31:19.373087 1831942 start.go:305] selected driver: kvm2
	I1013 14:31:19.373106 1831942 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.373215 1831942 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:31:19.374294 1831942 cni.go:84] Creating CNI manager for ""
	I1013 14:31:19.374351 1831942 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:31:19.374397 1831942 start.go:349] cluster config:
	{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.376483 1831942 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b3815b3d85db       56cc512116c8f       11 minutes ago      Exited              mount-munger              0                   e1b3239f98d8c       busybox-mount
	54e018365168b       c3994bc696102       11 minutes ago      Running             kube-apiserver            1                   ae5cac3c5f135       kube-apiserver-functional-608191
	73c62ac23dcef       52546a367cc9e       11 minutes ago      Running             coredns                   2                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	0bdcff79b6f2e       6e38f40d628db       11 minutes ago      Running             storage-provisioner       3                   31e2b1fefe43d       storage-provisioner
	9923b9c3b6134       c3994bc696102       11 minutes ago      Exited              kube-apiserver            0                   ae5cac3c5f135       kube-apiserver-functional-608191
	e3f11c67de677       c80c8dbafe7dd       11 minutes ago      Running             kube-controller-manager   2                   661659159fd35       kube-controller-manager-functional-608191
	552b6794b2ecf       7dd6aaa1717ab       11 minutes ago      Running             kube-scheduler            2                   d8c82bf329c20       kube-scheduler-functional-608191
	19906e68c850c       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       2                   31e2b1fefe43d       storage-provisioner
	b3d48b09ac4ab       fc25172553d79       11 minutes ago      Running             kube-proxy                2                   cccbb832d47ca       kube-proxy-cd8b5
	c9db6877437dc       5f1f5298c888d       11 minutes ago      Running             etcd                      2                   1136f8cb2bfda       etcd-functional-608191
	ccd1d671f4ad2       c80c8dbafe7dd       12 minutes ago      Exited              kube-controller-manager   1                   661659159fd35       kube-controller-manager-functional-608191
	20139c80c2b89       7dd6aaa1717ab       12 minutes ago      Exited              kube-scheduler            1                   d8c82bf329c20       kube-scheduler-functional-608191
	0ff2c0af6db42       5f1f5298c888d       12 minutes ago      Exited              etcd                      1                   1136f8cb2bfda       etcd-functional-608191
	242b510b56dc9       fc25172553d79       12 minutes ago      Exited              kube-proxy                1                   cccbb832d47ca       kube-proxy-cd8b5
	72508a8901416       52546a367cc9e       12 minutes ago      Exited              coredns                   1                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	
	
	==> containerd <==
	Oct 13 14:35:13 functional-608191 containerd[4454]: time="2025-10-13T14:35:13.542240680Z" level=warning msg="cleaning up after shim disconnected" id=heg9vxmcf4dxokzjedu03decn namespace=k8s.io
	Oct 13 14:35:13 functional-608191 containerd[4454]: time="2025-10-13T14:35:13.542361112Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 14:35:13 functional-608191 containerd[4454]: time="2025-10-13T14:35:13.860791896Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-608191\""
	Oct 13 14:35:13 functional-608191 containerd[4454]: time="2025-10-13T14:35:13.875450318Z" level=info msg="ImageCreate event name:\"sha256:9f825e5366b6d8792c2a58f5847aa69236470164c3674eba132d82f985a6c2ca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Oct 13 14:35:13 functional-608191 containerd[4454]: time="2025-10-13T14:35:13.879416883Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-608191\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Oct 13 14:35:55 functional-608191 containerd[4454]: time="2025-10-13T14:35:55.929800170Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 13 14:35:55 functional-608191 containerd[4454]: time="2025-10-13T14:35:55.933288784Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:56 functional-608191 containerd[4454]: time="2025-10-13T14:35:56.022072985Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:56 functional-608191 containerd[4454]: time="2025-10-13T14:35:56.130982652Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:35:56 functional-608191 containerd[4454]: time="2025-10-13T14:35:56.131069414Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 13 14:35:56 functional-608191 containerd[4454]: time="2025-10-13T14:35:56.929318261Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 13 14:35:56 functional-608191 containerd[4454]: time="2025-10-13T14:35:56.933955038Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.036305441Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.136622167Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.136695223Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.137879415Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.140661533Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.205424293Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.310157360Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:35:57 functional-608191 containerd[4454]: time="2025-10-13T14:35:57.310251481Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Oct 13 14:36:15 functional-608191 containerd[4454]: time="2025-10-13T14:36:15.932026379Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 13 14:36:15 functional-608191 containerd[4454]: time="2025-10-13T14:36:15.935282450Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:36:15 functional-608191 containerd[4454]: time="2025-10-13T14:36:15.999306473Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:36:16 functional-608191 containerd[4454]: time="2025-10-13T14:36:16.108316493Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:36:16 functional-608191 containerd[4454]: time="2025-10-13T14:36:16.108441217Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	
	
	==> coredns [72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36858 - 65360 "HINFO IN 3005092589584362483.1560966083017627098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026785639s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73c62ac23dcef061db1a2cf49c532093463ee196addc24e97307ab20dcf5aeec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35999 - 64742 "HINFO IN 8601583101275943645.7322847173454900088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031744201s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> describe nodes <==
	Name:               functional-608191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-608191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=functional-608191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T14_22_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 14:22:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-608191
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:35:42 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:35:42 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:35:42 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:35:42 +0000   Mon, 13 Oct 2025 14:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    functional-608191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3422538a8174bd0af79b99fa0817bbd
	  System UUID:                f3422538-a817-4bd0-af79-b99fa0817bbd
	  Boot ID:                    fe252248-25b4-47d2-aaf1-51a9660115e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7d8vj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-6qw7q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-5bb876957f-bpcvp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-b59r9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-608191                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-608191              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-608191     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cd8b5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-608191              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wfr2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-52xnc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeReady                13m                kubelet          Node functional-608191 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109826] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.093375] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.131370] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.239173] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.906606] kauditd_printk_skb: 283 callbacks suppressed
	[Oct13 14:23] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.987192] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.060246] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.772942] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.876433] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.906041] kauditd_printk_skb: 66 callbacks suppressed
	[Oct13 14:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.122131] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.245935] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.172113] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.058295] kauditd_printk_skb: 143 callbacks suppressed
	[Oct13 14:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.013503] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.195165] kauditd_printk_skb: 129 callbacks suppressed
	[  +9.784794] kauditd_printk_skb: 45 callbacks suppressed
	[Oct13 14:31] kauditd_printk_skb: 38 callbacks suppressed
	[Oct13 14:35] crun[8830]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.354105] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116] <==
	{"level":"warn","ts":"2025-10-13T14:23:50.478642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.507645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.509654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.535663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.545046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.565385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.653235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:24:33.216994Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T14:24:33.217137Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"error","ts":"2025-10-13T14:24:33.217254Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T14:24:33.219298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.219358Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2025-10-13T14:24:33.219480Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T14:24:33.219512Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T14:24:33.219213Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220399Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220436Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220027Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220466Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224162Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"error","ts":"2025-10-13T14:24:33.224284Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224309Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-10-13T14:24:33.224316Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [c9db6877437dc31eee9418cd82cb8418bccd7b125cd05fa5d3cb86774972e283] <==
	{"level":"warn","ts":"2025-10-13T14:24:43.755356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.766429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.779720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.797671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.808981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.823706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.834745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.849532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.864251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.890234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.903686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.914674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.934259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.947959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.965331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.980932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.008421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.020181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.034953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.058431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.158722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:34:43.214444Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1271}
	{"level":"info","ts":"2025-10-13T14:34:43.250683Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1271,"took":"35.01677ms","hash":2707211050,"current-db-size-bytes":4263936,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-13T14:34:43.250830Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2707211050,"revision":1271,"compact-revision":-1}
	
	
	==> kernel <==
	 14:36:20 up 14 min,  0 users,  load average: 0.12, 0.22, 0.21
	Linux functional-608191 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54e018365168b8ec6573769c8afa96e9b89eb529f2d32db595e00c0895ec563b] <==
	I1013 14:24:44.955259       1 aggregator.go:171] initial CRD sync complete...
	I1013 14:24:44.955267       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 14:24:44.955275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 14:24:44.955279       1 cache.go:39] Caches are synced for autoregister controller
	I1013 14:24:44.962085       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 14:24:44.973348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 14:24:44.983002       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 14:24:44.983039       1 policy_source.go:240] refreshing policies
	I1013 14:24:45.050478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 14:24:45.734011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 14:24:46.854394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 14:24:48.297742       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 14:24:48.389233       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 14:24:48.547871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 14:24:48.606961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 14:25:02.306998       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.126.250"}
	I1013 14:25:06.844289       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.60.71"}
	I1013 14:25:08.658755       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.255.215"}
	I1013 14:25:24.277694       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.51.235"}
	I1013 14:31:20.356432       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 14:31:20.455832       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 14:31:20.493227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 14:31:20.671491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.141.103"}
	I1013 14:31:20.698510       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.255.140"}
	I1013 14:34:44.903492       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [9923b9c3b6134565e2005a755337ee1e6d742736c6e3c9f98efee81bd4d5802c] <==
	I1013 14:24:41.642829       1 options.go:263] external host was not specified, using 192.168.39.10
	I1013 14:24:41.668518       1 server.go:150] Version: v1.34.1
	I1013 14:24:41.668782       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1013 14:24:41.675050       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158] <==
	I1013 14:23:55.330332       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 14:23:55.332979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:23:55.334816       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 14:23:55.338137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 14:23:55.340515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 14:23:55.341252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 14:23:55.342131       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:23:55.342984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 14:23:55.343841       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:23:55.345916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 14:23:55.345995       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 14:23:55.347514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:23:55.351800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 14:23:55.354174       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 14:23:55.354237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 14:23:55.365427       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 14:23:55.365646       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 14:23:55.365690       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:23:55.366053       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 14:23:55.367408       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:23:55.368689       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:23:55.368714       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 14:23:55.369361       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 14:23:55.369864       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:23:55.370627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [e3f11c67de677fc441824afcbe3a763614b71997830a304ba906478e55265073] <==
	I1013 14:24:48.251970       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 14:24:48.257247       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 14:24:48.264697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 14:24:48.268125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:24:48.270387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:24:48.277540       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 14:24:48.281492       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:24:48.281911       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:24:48.282455       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:24:48.283532       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 14:24:48.285291       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 14:24:48.285421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:24:48.286191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 14:24:48.286359       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 14:24:48.286219       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:24:48.287912       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:24:48.298773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E1013 14:31:20.470285       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.482663       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.490869       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498479       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511610       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.518744       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46] <==
	I1013 14:23:31.892503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:23:31.993145       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:23:31.993192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:23:31.993261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:23:32.032777       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:23:32.032888       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:23:32.032925       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:23:32.044710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:23:32.045212       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:23:32.045242       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:32.050189       1 config.go:200] "Starting service config controller"
	I1013 14:23:32.050219       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:23:32.050262       1 config.go:309] "Starting node config controller"
	I1013 14:23:32.050283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:23:32.050289       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:23:32.050702       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:23:32.050711       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:23:32.050725       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:23:32.050728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:23:32.151068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 14:23:32.151213       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:23:32.152939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b3d48b09ac4ab7f97ae8dd7256135561a415508f359989ac4035b756c0b49b56] <==
	I1013 14:24:34.497361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:24:36.901830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:24:36.901992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:24:36.902089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:24:36.962936       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:24:36.963219       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:24:36.963260       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:24:36.979965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:24:36.982117       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:24:36.982140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:36.987101       1 config.go:200] "Starting service config controller"
	I1013 14:24:36.987189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:24:36.987210       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:24:36.987213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:24:36.987227       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:24:36.987230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:24:36.989952       1 config.go:309] "Starting node config controller"
	I1013 14:24:36.989984       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:24:36.989991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:24:37.087813       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:24:37.087864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 14:24:37.087892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18] <==
	I1013 14:23:52.516984       1 serving.go:386] Generated self-signed cert in-memory
	I1013 14:23:53.392891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:23:53.393645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:53.416434       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 14:23:53.416479       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.416526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416539       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.416626       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.426367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:23:53.427869       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:23:53.517412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.517510       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.522735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.014501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 14:24:23.014800       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 14:24:23.014930       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 14:24:23.015015       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.015038       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1013 14:24:23.015060       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:24:23.016307       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 14:24:23.016453       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [552b6794b2ecff0f2c2558459d0aa52965219db398dc9269aade313c2bb7c25e] <==
	I1013 14:24:42.686856       1 serving.go:386] Generated self-signed cert in-memory
	W1013 14:24:44.871016       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 14:24:44.871060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 14:24:44.871069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 14:24:44.871075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 14:24:44.971082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:24:44.973132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:44.980825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.980854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.981656       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:24:44.981718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:24:45.083704       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:35:50 functional-608191 kubelet[5339]: E1013 14:35:50.929825    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:35:56 functional-608191 kubelet[5339]: E1013 14:35:56.131456    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:35:56 functional-608191 kubelet[5339]: E1013 14:35:56.131516    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:35:56 functional-608191 kubelet[5339]: E1013 14:35:56.131675    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-6qw7q_default(1804e076-c32c-4353-bff8-6c40d2b36a56): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:35:56 functional-608191 kubelet[5339]: E1013 14:35:56.131717    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.137033    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.137100    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.137314    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(e9c2282b-16f1-4201-a7d5-96801043f1ec): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.137348    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.310624    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.310674    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.310772    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-bpcvp_default(7939308f-4ee2-4691-9165-79aacfa8e749): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:35:57 functional-608191 kubelet[5339]: E1013 14:35:57.310809    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:35:58 functional-608191 kubelet[5339]: E1013 14:35:58.929286    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:36:00 functional-608191 kubelet[5339]: E1013 14:36:00.928822    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:36:05 functional-608191 kubelet[5339]: E1013 14:36:05.929723    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:36:08 functional-608191 kubelet[5339]: E1013 14:36:08.928697    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:36:09 functional-608191 kubelet[5339]: E1013 14:36:09.929958    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:36:10 functional-608191 kubelet[5339]: E1013 14:36:10.928316    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:36:11 functional-608191 kubelet[5339]: E1013 14:36:11.930725    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:36:16 functional-608191 kubelet[5339]: E1013 14:36:16.108694    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:36:16 functional-608191 kubelet[5339]: E1013 14:36:16.108764    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:36:16 functional-608191 kubelet[5339]: E1013 14:36:16.108986    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-7d8vj_default(57a285cb-fa31-4321-96bf-bbbd20c61bc2): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:36:16 functional-608191 kubelet[5339]: E1013 14:36:16.109033    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:36:19 functional-608191 kubelet[5339]: E1013 14:36:19.929976    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	
	
	==> storage-provisioner [0bdcff79b6f2eb18fd6df3944342b3f5a2cf125d450367aeaefda23398799bad] <==
	W1013 14:35:56.310363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:58.314738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:58.321364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:00.325481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:00.333346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:02.337717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:02.343783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:04.347225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:04.356761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:06.361302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:06.372228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:08.375764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:08.382624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:10.387747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:10.399511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:12.404082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:12.410748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:14.413791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:14.422831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:16.427436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:16.432501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:18.437110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:18.442525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:20.446408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:36:20.456904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19906e68c850cc4d2665f6dca007cff3878b00054b2f9e7752b01a49703c8a5b] <==
	I1013 14:24:35.231238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 14:24:35.233267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
helpers_test.go:269: (dbg) Run:  kubectl --context functional-608191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1 (107.609222ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://6b3815b3d85db29741068c9a9b97514906bd1ef352cdf42ca5d2734f39a724e6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 14:25:11 +0000
	      Finished:     Mon, 13 Oct 2025 14:25:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpkbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wpkbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/busybox-mount to functional-608191
	  Normal  Pulling    11m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.486s (1.486s including waiting). Image size: 2395207 bytes.
	  Normal  Created    11m   kubelet            Created container: mount-munger
	  Normal  Started    11m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7d8vj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gctw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6gctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7d8vj to functional-608191
	  Normal   Pulling    8m (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m59s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m59s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     57s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6qw7q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cgfsd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
	  Normal   Pulling    8m16s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m15s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m15s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    64s (x43 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     64s (x43 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-bpcvp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtwds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vtwds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-bpcvp to functional-608191
	  Normal   Pulling    8m28s (x5 over 11m)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     8m27s (x5 over 11m)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m27s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    72s (x42 over 11m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     72s (x42 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqfp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kdqfp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-608191
	  Normal   Pulling    8m14s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     8m13s (x5 over 11m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m13s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    63s (x42 over 11m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     63s (x42 over 11m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wfr2r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-52xnc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-608191 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-608191 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6qw7q" [1804e076-c32c-4353-bff8-6c40d2b36a56] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-13 14:35:08.990195284 +0000 UTC m=+2399.930753657
functional_test.go:1645: (dbg) Run:  kubectl --context functional-608191 describe po hello-node-connect-7d85dfc575-6qw7q -n default
functional_test.go:1645: (dbg) kubectl --context functional-608191 describe po hello-node-connect-7d85dfc575-6qw7q -n default:
Name:             hello-node-connect-7d85dfc575-6qw7q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-608191/192.168.39.10
Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cgfsd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-608191 logs hello-node-connect-7d85dfc575-6qw7q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-608191 logs hello-node-connect-7d85dfc575-6qw7q -n default: exit status 1 (72.814587ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6qw7q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-608191 logs hello-node-connect-7d85dfc575-6qw7q -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-608191 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6qw7q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-608191/192.168.39.10
Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cgfsd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-608191 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-608191 logs -l app=hello-node-connect: exit status 1 (81.287872ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6qw7q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-608191 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-608191 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.255.215
IPs:                      10.111.255.215
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31037/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-608191 -n functional-608191
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs -n 25: (1.878101369s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                ARGS                                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-608191 ssh sudo umount -f /mount-9p                                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount3 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh       │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount1 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount2 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh       │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh       │ functional-608191 ssh findmnt -T /mount2                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh       │ functional-608191 ssh findmnt -T /mount3                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ mount     │ -p functional-608191 --kill=true                                                                                                                                   │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image save kicbase/echo-server:functional-608191 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image rm kicbase/echo-server:functional-608191 --alsologtostderr                                                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image save --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ start     │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start     │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start     │ -p functional-608191 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                    │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-608191 --alsologtostderr -v=1                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 14:31:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 14:31:19.291613 1831942 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:31:19.291999 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292017 1831942 out.go:374] Setting ErrFile to fd 2...
	I1013 14:31:19.292025 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292396 1831942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:31:19.293045 1831942 out.go:368] Setting JSON to false
	I1013 14:31:19.294312 1831942 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22427,"bootTime":1760343452,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:31:19.294428 1831942 start.go:141] virtualization: kvm guest
	I1013 14:31:19.296444 1831942 out.go:179] * [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:31:19.297978 1831942 notify.go:220] Checking for updates...
	I1013 14:31:19.297983 1831942 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:31:19.299274 1831942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:31:19.300464 1831942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:31:19.301569 1831942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:31:19.302616 1831942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:31:19.303778 1831942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:31:19.305317 1831942 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:31:19.305931 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.305984 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.320114 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I1013 14:31:19.320672 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.321379 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.321408 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.321835 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.322029 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.322314 1831942 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:31:19.322636 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.322674 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.337144 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1013 14:31:19.337704 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.338258 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.338283 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.338647 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.338878 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.371631 1831942 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 14:31:19.373087 1831942 start.go:305] selected driver: kvm2
	I1013 14:31:19.373106 1831942 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.373215 1831942 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:31:19.374294 1831942 cni.go:84] Creating CNI manager for ""
	I1013 14:31:19.374351 1831942 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:31:19.374397 1831942 start.go:349] cluster config:
	{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.376483 1831942 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b3815b3d85db       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   e1b3239f98d8c       busybox-mount
	54e018365168b       c3994bc696102       10 minutes ago      Running             kube-apiserver            1                   ae5cac3c5f135       kube-apiserver-functional-608191
	73c62ac23dcef       52546a367cc9e       10 minutes ago      Running             coredns                   2                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	0bdcff79b6f2e       6e38f40d628db       10 minutes ago      Running             storage-provisioner       3                   31e2b1fefe43d       storage-provisioner
	9923b9c3b6134       c3994bc696102       10 minutes ago      Exited              kube-apiserver            0                   ae5cac3c5f135       kube-apiserver-functional-608191
	e3f11c67de677       c80c8dbafe7dd       10 minutes ago      Running             kube-controller-manager   2                   661659159fd35       kube-controller-manager-functional-608191
	552b6794b2ecf       7dd6aaa1717ab       10 minutes ago      Running             kube-scheduler            2                   d8c82bf329c20       kube-scheduler-functional-608191
	19906e68c850c       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       2                   31e2b1fefe43d       storage-provisioner
	b3d48b09ac4ab       fc25172553d79       10 minutes ago      Running             kube-proxy                2                   cccbb832d47ca       kube-proxy-cd8b5
	c9db6877437dc       5f1f5298c888d       10 minutes ago      Running             etcd                      2                   1136f8cb2bfda       etcd-functional-608191
	ccd1d671f4ad2       c80c8dbafe7dd       11 minutes ago      Exited              kube-controller-manager   1                   661659159fd35       kube-controller-manager-functional-608191
	20139c80c2b89       7dd6aaa1717ab       11 minutes ago      Exited              kube-scheduler            1                   d8c82bf329c20       kube-scheduler-functional-608191
	0ff2c0af6db42       5f1f5298c888d       11 minutes ago      Exited              etcd                      1                   1136f8cb2bfda       etcd-functional-608191
	242b510b56dc9       fc25172553d79       11 minutes ago      Exited              kube-proxy                1                   cccbb832d47ca       kube-proxy-cd8b5
	72508a8901416       52546a367cc9e       11 minutes ago      Exited              coredns                   1                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	
	
	==> containerd <==
	Oct 13 14:32:02 functional-608191 containerd[4454]: time="2025-10-13T14:32:02.931671465Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:32:02 functional-608191 containerd[4454]: time="2025-10-13T14:32:02.935992878Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.018444095Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.119633468Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.119762092Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 13 14:32:42 functional-608191 containerd[4454]: time="2025-10-13T14:32:42.930835811Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 14:32:42 functional-608191 containerd[4454]: time="2025-10-13T14:32:42.935019289Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.002007856Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.099670214Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.099737744Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.930848465Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.934068289Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.997808775Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:56 functional-608191 containerd[4454]: time="2025-10-13T14:32:56.108129753Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:56 functional-608191 containerd[4454]: time="2025-10-13T14:32:56.108256224Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 13 14:34:11 functional-608191 containerd[4454]: time="2025-10-13T14:34:11.930086814Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 14:34:11 functional-608191 containerd[4454]: time="2025-10-13T14:34:11.933230811Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.008011644Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.107862083Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.107946833Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 13 14:34:21 functional-608191 containerd[4454]: time="2025-10-13T14:34:21.929779525Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:34:21 functional-608191 containerd[4454]: time="2025-10-13T14:34:21.933836309Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.021537437Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.117503535Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.117693342Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	
	
	==> coredns [72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36858 - 65360 "HINFO IN 3005092589584362483.1560966083017627098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026785639s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73c62ac23dcef061db1a2cf49c532093463ee196addc24e97307ab20dcf5aeec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35999 - 64742 "HINFO IN 8601583101275943645.7322847173454900088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031744201s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> describe nodes <==
	Name:               functional-608191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-608191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=functional-608191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T14_22_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 14:22:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-608191
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:35:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    functional-608191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3422538a8174bd0af79b99fa0817bbd
	  System UUID:                f3422538-a817-4bd0-af79-b99fa0817bbd
	  Boot ID:                    fe252248-25b4-47d2-aaf1-51a9660115e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7d8vj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-6qw7q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bpcvp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-b59r9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-608191                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-608191              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-608191     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cd8b5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-608191              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wfr2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-52xnc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-608191 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	
	
	==> dmesg <==
	[  +1.179092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109826] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.093375] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.131370] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.239173] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.906606] kauditd_printk_skb: 283 callbacks suppressed
	[Oct13 14:23] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.987192] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.060246] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.772942] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.876433] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.906041] kauditd_printk_skb: 66 callbacks suppressed
	[Oct13 14:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.122131] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.245935] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.172113] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.058295] kauditd_printk_skb: 143 callbacks suppressed
	[Oct13 14:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.013503] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.195165] kauditd_printk_skb: 129 callbacks suppressed
	[  +9.784794] kauditd_printk_skb: 45 callbacks suppressed
	[Oct13 14:31] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116] <==
	{"level":"warn","ts":"2025-10-13T14:23:50.478642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.507645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.509654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.535663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.545046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.565385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.653235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:24:33.216994Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T14:24:33.217137Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"error","ts":"2025-10-13T14:24:33.217254Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T14:24:33.219298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.219358Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2025-10-13T14:24:33.219480Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T14:24:33.219512Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T14:24:33.219213Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220399Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220436Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220027Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220466Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224162Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"error","ts":"2025-10-13T14:24:33.224284Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224309Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-10-13T14:24:33.224316Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [c9db6877437dc31eee9418cd82cb8418bccd7b125cd05fa5d3cb86774972e283] <==
	{"level":"warn","ts":"2025-10-13T14:24:43.755356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.766429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.779720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.797671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.808981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.823706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.834745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.849532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.864251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.890234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.903686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.914674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.934259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.947959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.965331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.980932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.008421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.020181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.034953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.058431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.158722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:34:43.214444Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1271}
	{"level":"info","ts":"2025-10-13T14:34:43.250683Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1271,"took":"35.01677ms","hash":2707211050,"current-db-size-bytes":4263936,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-13T14:34:43.250830Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2707211050,"revision":1271,"compact-revision":-1}
	
	
	==> kernel <==
	 14:35:10 up 13 min,  0 users,  load average: 0.19, 0.25, 0.22
	Linux functional-608191 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54e018365168b8ec6573769c8afa96e9b89eb529f2d32db595e00c0895ec563b] <==
	I1013 14:24:44.955259       1 aggregator.go:171] initial CRD sync complete...
	I1013 14:24:44.955267       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 14:24:44.955275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 14:24:44.955279       1 cache.go:39] Caches are synced for autoregister controller
	I1013 14:24:44.962085       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 14:24:44.973348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 14:24:44.983002       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 14:24:44.983039       1 policy_source.go:240] refreshing policies
	I1013 14:24:45.050478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 14:24:45.734011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 14:24:46.854394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 14:24:48.297742       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 14:24:48.389233       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 14:24:48.547871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 14:24:48.606961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 14:25:02.306998       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.126.250"}
	I1013 14:25:06.844289       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.60.71"}
	I1013 14:25:08.658755       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.255.215"}
	I1013 14:25:24.277694       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.51.235"}
	I1013 14:31:20.356432       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 14:31:20.455832       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 14:31:20.493227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 14:31:20.671491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.141.103"}
	I1013 14:31:20.698510       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.255.140"}
	I1013 14:34:44.903492       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [9923b9c3b6134565e2005a755337ee1e6d742736c6e3c9f98efee81bd4d5802c] <==
	I1013 14:24:41.642829       1 options.go:263] external host was not specified, using 192.168.39.10
	I1013 14:24:41.668518       1 server.go:150] Version: v1.34.1
	I1013 14:24:41.668782       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1013 14:24:41.675050       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158] <==
	I1013 14:23:55.330332       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 14:23:55.332979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:23:55.334816       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 14:23:55.338137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 14:23:55.340515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 14:23:55.341252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 14:23:55.342131       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:23:55.342984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 14:23:55.343841       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:23:55.345916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 14:23:55.345995       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 14:23:55.347514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:23:55.351800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 14:23:55.354174       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 14:23:55.354237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 14:23:55.365427       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 14:23:55.365646       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 14:23:55.365690       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:23:55.366053       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 14:23:55.367408       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:23:55.368689       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:23:55.368714       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 14:23:55.369361       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 14:23:55.369864       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:23:55.370627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [e3f11c67de677fc441824afcbe3a763614b71997830a304ba906478e55265073] <==
	I1013 14:24:48.251970       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 14:24:48.257247       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 14:24:48.264697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 14:24:48.268125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:24:48.270387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:24:48.277540       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 14:24:48.281492       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:24:48.281911       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:24:48.282455       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:24:48.283532       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 14:24:48.285291       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 14:24:48.285421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:24:48.286191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 14:24:48.286359       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 14:24:48.286219       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:24:48.287912       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:24:48.298773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E1013 14:31:20.470285       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.482663       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.490869       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498479       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511610       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.518744       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46] <==
	I1013 14:23:31.892503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:23:31.993145       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:23:31.993192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:23:31.993261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:23:32.032777       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:23:32.032888       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:23:32.032925       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:23:32.044710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:23:32.045212       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:23:32.045242       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:32.050189       1 config.go:200] "Starting service config controller"
	I1013 14:23:32.050219       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:23:32.050262       1 config.go:309] "Starting node config controller"
	I1013 14:23:32.050283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:23:32.050289       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:23:32.050702       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:23:32.050711       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:23:32.050725       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:23:32.050728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:23:32.151068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 14:23:32.151213       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:23:32.152939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b3d48b09ac4ab7f97ae8dd7256135561a415508f359989ac4035b756c0b49b56] <==
	I1013 14:24:34.497361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:24:36.901830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:24:36.901992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:24:36.902089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:24:36.962936       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:24:36.963219       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:24:36.963260       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:24:36.979965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:24:36.982117       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:24:36.982140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:36.987101       1 config.go:200] "Starting service config controller"
	I1013 14:24:36.987189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:24:36.987210       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:24:36.987213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:24:36.987227       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:24:36.987230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:24:36.989952       1 config.go:309] "Starting node config controller"
	I1013 14:24:36.989984       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:24:36.989991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:24:37.087813       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:24:37.087864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 14:24:37.087892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18] <==
	I1013 14:23:52.516984       1 serving.go:386] Generated self-signed cert in-memory
	I1013 14:23:53.392891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:23:53.393645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:53.416434       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 14:23:53.416479       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.416526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416539       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.416626       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.426367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:23:53.427869       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:23:53.517412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.517510       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.522735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.014501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 14:24:23.014800       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 14:24:23.014930       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 14:24:23.015015       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.015038       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1013 14:24:23.015060       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:24:23.016307       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 14:24:23.016453       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [552b6794b2ecff0f2c2558459d0aa52965219db398dc9269aade313c2bb7c25e] <==
	I1013 14:24:42.686856       1 serving.go:386] Generated self-signed cert in-memory
	W1013 14:24:44.871016       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 14:24:44.871060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 14:24:44.871069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 14:24:44.871075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 14:24:44.971082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:24:44.973132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:44.980825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.980854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.981656       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:24:44.981718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:24:45.083704       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:34:22 functional-608191 kubelet[5339]: E1013 14:34:22.118237    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:22 functional-608191 kubelet[5339]: E1013 14:34:22.928621    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:23 functional-608191 kubelet[5339]: E1013 14:34:23.928778    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:24 functional-608191 kubelet[5339]: E1013 14:34:24.929972    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:25 functional-608191 kubelet[5339]: E1013 14:34:25.931084    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:26 functional-608191 kubelet[5339]: E1013 14:34:26.928699    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:35 functional-608191 kubelet[5339]: E1013 14:34:35.932119    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:35 functional-608191 kubelet[5339]: E1013 14:34:35.933326    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:36 functional-608191 kubelet[5339]: E1013 14:34:36.928982    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:36 functional-608191 kubelet[5339]: E1013 14:34:36.931329    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:38 functional-608191 kubelet[5339]: E1013 14:34:38.930915    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:39 functional-608191 kubelet[5339]: E1013 14:34:39.929086    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:46 functional-608191 kubelet[5339]: E1013 14:34:46.929195    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:47 functional-608191 kubelet[5339]: E1013 14:34:47.928763    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:48 functional-608191 kubelet[5339]: E1013 14:34:48.929304    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:50 functional-608191 kubelet[5339]: E1013 14:34:50.929375    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:52 functional-608191 kubelet[5339]: E1013 14:34:52.928479    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:53 functional-608191 kubelet[5339]: E1013 14:34:53.931976    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:58 functional-608191 kubelet[5339]: E1013 14:34:58.928672    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:58 functional-608191 kubelet[5339]: E1013 14:34:58.930349    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:35:02 functional-608191 kubelet[5339]: E1013 14:35:02.930289    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:35:04 functional-608191 kubelet[5339]: E1013 14:35:04.928352    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:35:05 functional-608191 kubelet[5339]: E1013 14:35:05.928817    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:35:06 functional-608191 kubelet[5339]: E1013 14:35:06.930180    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:35:09 functional-608191 kubelet[5339]: E1013 14:35:09.929072    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	
	
	==> storage-provisioner [0bdcff79b6f2eb18fd6df3944342b3f5a2cf125d450367aeaefda23398799bad] <==
	W1013 14:34:45.900443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:47.905816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:47.911914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:49.915993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:49.925079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:51.930662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:51.937634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:53.943510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:53.951341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:55.955382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:55.964879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:57.968455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:57.974874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:59.979211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:59.989103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:01.992631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:01.998388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:04.003511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:04.009683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:06.013113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:06.018885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:08.023100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:08.030794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:10.038013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:10.052740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19906e68c850cc4d2665f6dca007cff3878b00054b2f9e7752b01a49703c8a5b] <==
	I1013 14:24:35.231238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 14:24:35.233267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
helpers_test.go:269: (dbg) Run:  kubectl --context functional-608191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1 (114.007464ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://6b3815b3d85db29741068c9a9b97514906bd1ef352cdf42ca5d2734f39a724e6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 14:25:11 +0000
	      Finished:     Mon, 13 Oct 2025 14:25:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpkbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wpkbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-608191
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.486s (1.486s including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7d8vj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gctw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6gctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7d8vj to functional-608191
	  Normal   Pulling    6m51s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m50s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m44s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m44s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6qw7q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cgfsd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-bpcvp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtwds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vtwds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-bpcvp to functional-608191
	  Normal   Pulling    7m19s (x5 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m18s (x5 over 10m)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m18s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x42 over 10m)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3s (x42 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqfp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kdqfp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/sp-pod to functional-608191
	  Normal   Pulling    7m5s (x5 over 9m58s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m49s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m36s (x21 over 9m57s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wfr2r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-52xnc" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
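Every image-pull failure in the section above reports the same root cause: Docker Hub returned `429 Too Many Requests` because the CI host exhausted its unauthenticated pull quota. A minimal diagnostic sketch (not part of the test suite; assumes `curl` and `sed` are available and the host can reach Docker Hub) that checks the remaining pull budget for the current IP via Docker's documented `ratelimitpreview/test` endpoint:

```shell
#!/bin/sh
# Hedged sketch: inspect Docker Hub's pull-rate-limit headers for this IP.
# The ratelimitpreview/test repository is the endpoint Docker documents
# for exactly this purpose; a HEAD request against it does not consume
# a pull.

token_from_json() {
  # Extract the "token" field from the auth service's JSON response.
  sed -n 's/.*"token":"\([^"]*\)".*/\1/p'
}

check_rate_limit() {
  # 1. Fetch an anonymous pull token scoped to ratelimitpreview/test.
  token=$(curl -fsS \
    "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
    | token_from_json)
  # 2. HEAD the manifest; the quota comes back as response headers
  #    (ratelimit-limit, ratelimit-remaining).
  curl -fsSI -H "Authorization: Bearer ${token}" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
    | grep -i '^ratelimit'
}
```

Running `check_rate_limit` on the CI host during one of these failures would be expected to show `ratelimit-remaining: 0;w=21600`, i.e. zero pulls left in the 6-hour window; the usual mitigations are authenticating pulls or fronting Docker Hub with a pull-through cache.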

TestFunctional/parallel/PersistentVolumeClaim (370.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [316b9a37-6b1a-4349-b8f1-641507a4c795] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.008492531s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-608191 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-608191 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-608191 get pvc myclaim -o=json
I1013 14:25:12.489305 1814927 retry.go:31] will retry after 1.435228119s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3fcf5066-d524-42ac-ab15-ca569f27ae11 ResourceVersion:812 Generation:0 CreationTimestamp:2025-10-13 14:25:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-3fcf5066-d524-42ac-ab15-ca569f27ae11 StorageClassName:0xc001b366e0 VolumeMode:0xc001b366f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-608191 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-608191 apply -f testdata/storage-provisioner/pod.yaml
I1013 14:25:14.144351 1814927 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e9c2282b-16f1-4201-a7d5-96801043f1ec] Pending
helpers_test.go:352: "sp-pod" [e9c2282b-16f1-4201-a7d5-96801043f1ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-13 14:31:14.429094877 +0000 UTC m=+2165.369653250
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-608191 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-608191 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-608191/192.168.39.10
Start Time:       Mon, 13 Oct 2025 14:25:14 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqfp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-kdqfp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-608191
  Normal   Pulling    3m7s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m6s (x5 over 6m)     kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m6s (x5 over 6m)     kubelet            Error: ErrImagePull
  Warning  Failed     51s (x20 over 5m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    38s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-608191 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-608191 logs sp-pod -n default: exit status 1 (102.164357ms)

** stderr **
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-608191 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-608191 -n functional-608191
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs -n 25: (1.58002753s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdspecific-port931733410/001:/mount-9p --alsologtostderr -v=1 --port 46464                                   │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh     │ functional-608191 ssh findmnt -T /mount-9p | grep 9p                                                                                                               │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh     │ functional-608191 ssh findmnt -T /mount-9p | grep 9p                                                                                                               │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh     │ functional-608191 ssh -- ls -la /mount-9p                                                                                                                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh     │ functional-608191 ssh sudo umount -f /mount-9p                                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount   │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount3 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh     │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount   │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount1 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount   │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount2 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh     │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh     │ functional-608191 ssh findmnt -T /mount2                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh     │ functional-608191 ssh findmnt -T /mount3                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ mount   │ -p functional-608191 --kill=true                                                                                                                                   │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ image   │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image save kicbase/echo-server:functional-608191 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image rm kicbase/echo-server:functional-608191 --alsologtostderr                                                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image   │ functional-608191 image save --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 14:24:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 14:24:17.835009 1828239 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:24:17.835109 1828239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:24:17.835112 1828239 out.go:374] Setting ErrFile to fd 2...
	I1013 14:24:17.835115 1828239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:24:17.835300 1828239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:24:17.835786 1828239 out.go:368] Setting JSON to false
	I1013 14:24:17.836757 1828239 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22006,"bootTime":1760343452,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:24:17.836813 1828239 start.go:141] virtualization: kvm guest
	I1013 14:24:17.838916 1828239 out.go:179] * [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:24:17.840918 1828239 notify.go:220] Checking for updates...
	I1013 14:24:17.840941 1828239 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:24:17.842365 1828239 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:24:17.843990 1828239 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:24:17.845451 1828239 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:24:17.847016 1828239 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:24:17.848565 1828239 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:24:17.850490 1828239 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:24:17.850609 1828239 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:24:17.851063 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:17.851100 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:17.865424 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I1013 14:24:17.865983 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:17.866559 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:17.866580 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:17.866961 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:17.867188 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:17.899007 1828239 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 14:24:17.900479 1828239 start.go:305] selected driver: kvm2
	I1013 14:24:17.900488 1828239 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:24:17.900668 1828239 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:24:17.901141 1828239 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 14:24:17.901221 1828239 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 14:24:17.916093 1828239 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 14:24:17.916122 1828239 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 14:24:17.930803 1828239 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 14:24:17.931900 1828239 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 14:24:17.931931 1828239 cni.go:84] Creating CNI manager for ""
	I1013 14:24:17.932016 1828239 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:24:17.932088 1828239 start.go:349] cluster config:
	{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:24:17.932226 1828239 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 14:24:17.935270 1828239 out.go:179] * Starting "functional-608191" primary control-plane node in "functional-608191" cluster
	I1013 14:24:17.936608 1828239 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 14:24:17.936660 1828239 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 14:24:17.936668 1828239 cache.go:58] Caching tarball of preloaded images
	I1013 14:24:17.936787 1828239 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 14:24:17.936793 1828239 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 14:24:17.936891 1828239 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/config.json ...
	I1013 14:24:17.937100 1828239 start.go:360] acquireMachinesLock for functional-608191: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 14:24:17.937167 1828239 start.go:364] duration metric: took 47.453µs to acquireMachinesLock for "functional-608191"
	I1013 14:24:17.937192 1828239 start.go:96] Skipping create...Using existing machine configuration
	I1013 14:24:17.937196 1828239 fix.go:54] fixHost starting: 
	I1013 14:24:17.937454 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:17.937486 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:17.952333 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I1013 14:24:17.952907 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:17.953499 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:17.953522 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:17.954010 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:17.954313 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:17.954545 1828239 main.go:141] libmachine: (functional-608191) Calling .GetState
	I1013 14:24:17.956665 1828239 fix.go:112] recreateIfNeeded on functional-608191: state=Running err=<nil>
	W1013 14:24:17.956681 1828239 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 14:24:17.958851 1828239 out.go:252] * Updating the running kvm2 "functional-608191" VM ...
	I1013 14:24:17.958879 1828239 machine.go:93] provisionDockerMachine start ...
	I1013 14:24:17.958896 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:17.959205 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:17.962396 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:17.962833 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:17.962850 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:17.963045 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:17.963244 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:17.963416 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:17.963523 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:17.963674 1828239 main.go:141] libmachine: Using SSH client type: native
	I1013 14:24:17.963953 1828239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1013 14:24:17.963959 1828239 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 14:24:18.083310 1828239 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-608191
	
	I1013 14:24:18.083340 1828239 main.go:141] libmachine: (functional-608191) Calling .GetMachineName
	I1013 14:24:18.083667 1828239 buildroot.go:166] provisioning hostname "functional-608191"
	I1013 14:24:18.083694 1828239 main.go:141] libmachine: (functional-608191) Calling .GetMachineName
	I1013 14:24:18.083935 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.087333 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.087674 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.087700 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.087904 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:18.088120 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.088273 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.088449 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:18.088654 1828239 main.go:141] libmachine: Using SSH client type: native
	I1013 14:24:18.088875 1828239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1013 14:24:18.088883 1828239 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-608191 && echo "functional-608191" | sudo tee /etc/hostname
	I1013 14:24:18.223126 1828239 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-608191
	
	I1013 14:24:18.223145 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.226355 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.226780 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.226811 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.227005 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:18.227214 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.227361 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.227503 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:18.227702 1828239 main.go:141] libmachine: Using SSH client type: native
	I1013 14:24:18.227932 1828239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1013 14:24:18.227943 1828239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-608191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608191/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-608191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 14:24:18.344792 1828239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 14:24:18.344822 1828239 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 14:24:18.344843 1828239 buildroot.go:174] setting up certificates
	I1013 14:24:18.344853 1828239 provision.go:84] configureAuth start
	I1013 14:24:18.344861 1828239 main.go:141] libmachine: (functional-608191) Calling .GetMachineName
	I1013 14:24:18.345249 1828239 main.go:141] libmachine: (functional-608191) Calling .GetIP
	I1013 14:24:18.348376 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.348786 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.348809 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.349002 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.351295 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.351623 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.351652 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.351811 1828239 provision.go:143] copyHostCerts
	I1013 14:24:18.351881 1828239 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 14:24:18.351900 1828239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 14:24:18.352010 1828239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 14:24:18.352135 1828239 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 14:24:18.352141 1828239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 14:24:18.352182 1828239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 14:24:18.352333 1828239 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 14:24:18.352341 1828239 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 14:24:18.352381 1828239 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 14:24:18.352455 1828239 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.functional-608191 san=[127.0.0.1 192.168.39.10 functional-608191 localhost minikube]
	I1013 14:24:18.494585 1828239 provision.go:177] copyRemoteCerts
	I1013 14:24:18.494653 1828239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 14:24:18.494681 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.497754 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.498077 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.498095 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.498325 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:18.498568 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.498830 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:18.499052 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:18.588867 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 14:24:18.622546 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 14:24:18.657429 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 14:24:18.690010 1828239 provision.go:87] duration metric: took 345.141451ms to configureAuth
	I1013 14:24:18.690077 1828239 buildroot.go:189] setting minikube options for container-runtime
	I1013 14:24:18.690358 1828239 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:24:18.690367 1828239 machine.go:96] duration metric: took 731.482167ms to provisionDockerMachine
	I1013 14:24:18.690377 1828239 start.go:293] postStartSetup for "functional-608191" (driver="kvm2")
	I1013 14:24:18.690388 1828239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 14:24:18.690418 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:18.691042 1828239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 14:24:18.691073 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.693794 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.694241 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.694265 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.694424 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:18.694656 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.694850 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:18.695015 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:18.783036 1828239 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 14:24:18.788221 1828239 info.go:137] Remote host: Buildroot 2025.02
	I1013 14:24:18.788250 1828239 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 14:24:18.788315 1828239 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 14:24:18.788383 1828239 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 14:24:18.788448 1828239 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/test/nested/copy/1814927/hosts -> hosts in /etc/test/nested/copy/1814927
	I1013 14:24:18.788480 1828239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1814927
	I1013 14:24:18.801695 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 14:24:18.835738 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/test/nested/copy/1814927/hosts --> /etc/test/nested/copy/1814927/hosts (40 bytes)
	I1013 14:24:18.875981 1828239 start.go:296] duration metric: took 185.575594ms for postStartSetup
	I1013 14:24:18.876016 1828239 fix.go:56] duration metric: took 938.819047ms for fixHost
	I1013 14:24:18.876058 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:18.879309 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.879739 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:18.879773 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:18.879930 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:18.880147 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.880310 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:18.880418 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:18.880602 1828239 main.go:141] libmachine: Using SSH client type: native
	I1013 14:24:18.880817 1828239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1013 14:24:18.880821 1828239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 14:24:18.998000 1828239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760365458.995364029
	
	I1013 14:24:18.998029 1828239 fix.go:216] guest clock: 1760365458.995364029
	I1013 14:24:18.998038 1828239 fix.go:229] Guest: 2025-10-13 14:24:18.995364029 +0000 UTC Remote: 2025-10-13 14:24:18.876018329 +0000 UTC m=+1.086709500 (delta=119.3457ms)
	I1013 14:24:18.998085 1828239 fix.go:200] guest clock delta is within tolerance: 119.3457ms
	I1013 14:24:18.998092 1828239 start.go:83] releasing machines lock for "functional-608191", held for 1.060915393s
	I1013 14:24:18.998127 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:18.998555 1828239 main.go:141] libmachine: (functional-608191) Calling .GetIP
	I1013 14:24:19.001697 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.002116 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:19.002136 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.002324 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:19.002955 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:19.003172 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:19.003297 1828239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 14:24:19.003335 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:19.003412 1828239 ssh_runner.go:195] Run: cat /version.json
	I1013 14:24:19.003433 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:19.006884 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.006915 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.007436 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:19.007467 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:19.007484 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.007496 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:19.007764 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:19.007784 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:19.007987 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:19.008012 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:19.008149 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:19.008154 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:19.008442 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:19.008451 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:19.096274 1828239 ssh_runner.go:195] Run: systemctl --version
	I1013 14:24:19.124072 1828239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 14:24:19.131456 1828239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 14:24:19.131516 1828239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 14:24:19.144585 1828239 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 14:24:19.144600 1828239 start.go:495] detecting cgroup driver to use...
	I1013 14:24:19.144672 1828239 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 14:24:19.163623 1828239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 14:24:19.180388 1828239 docker.go:218] disabling cri-docker service (if available) ...
	I1013 14:24:19.180454 1828239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 14:24:19.201745 1828239 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 14:24:19.219525 1828239 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 14:24:19.415674 1828239 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 14:24:19.591986 1828239 docker.go:234] disabling docker service ...
	I1013 14:24:19.592057 1828239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 14:24:19.622633 1828239 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 14:24:19.641618 1828239 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 14:24:19.838083 1828239 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 14:24:20.023952 1828239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 14:24:20.042696 1828239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 14:24:20.069519 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 14:24:20.083744 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 14:24:20.097555 1828239 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 14:24:20.097635 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 14:24:20.110901 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 14:24:20.123918 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 14:24:20.136814 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 14:24:20.149898 1828239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 14:24:20.164839 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 14:24:20.178954 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 14:24:20.192731 1828239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 14:24:20.208289 1828239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 14:24:20.219950 1828239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 14:24:20.232382 1828239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:24:20.413748 1828239 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 14:24:20.477328 1828239 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 14:24:20.477418 1828239 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 14:24:20.483487 1828239 retry.go:31] will retry after 1.020435078s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 14:24:21.504767 1828239 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 14:24:21.511704 1828239 start.go:563] Will wait 60s for crictl version
	I1013 14:24:21.511793 1828239 ssh_runner.go:195] Run: which crictl
	I1013 14:24:21.516600 1828239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 14:24:21.554781 1828239 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 14:24:21.554843 1828239 ssh_runner.go:195] Run: containerd --version
	I1013 14:24:21.585954 1828239 ssh_runner.go:195] Run: containerd --version
	I1013 14:24:21.619063 1828239 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 14:24:21.620821 1828239 main.go:141] libmachine: (functional-608191) Calling .GetIP
	I1013 14:24:21.624172 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:21.624635 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:21.624662 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:21.625010 1828239 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 14:24:21.632092 1828239 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1013 14:24:21.633041 1828239 kubeadm.go:883] updating cluster {Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 14:24:21.633173 1828239 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 14:24:21.633249 1828239 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 14:24:21.674164 1828239 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 14:24:21.674178 1828239 containerd.go:534] Images already preloaded, skipping extraction
	I1013 14:24:21.674232 1828239 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 14:24:21.715304 1828239 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 14:24:21.715331 1828239 cache_images.go:85] Images are preloaded, skipping loading
	I1013 14:24:21.715340 1828239 kubeadm.go:934] updating node { 192.168.39.10 8441 v1.34.1 containerd true true} ...
	I1013 14:24:21.715481 1828239 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 14:24:21.715555 1828239 ssh_runner.go:195] Run: sudo crictl info
	I1013 14:24:21.753144 1828239 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1013 14:24:21.753164 1828239 cni.go:84] Creating CNI manager for ""
	I1013 14:24:21.753173 1828239 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:24:21.753183 1828239 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 14:24:21.753209 1828239 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608191 NodeName:functional-608191 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletCo
nfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 14:24:21.753336 1828239 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-608191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 14:24:21.753397 1828239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 14:24:21.768535 1828239 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 14:24:21.768606 1828239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 14:24:21.782369 1828239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1013 14:24:21.809469 1828239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 14:24:21.835034 1828239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
	I1013 14:24:21.860599 1828239 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I1013 14:24:21.867097 1828239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:24:22.058460 1828239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 14:24:22.078754 1828239 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191 for IP: 192.168.39.10
	I1013 14:24:22.078767 1828239 certs.go:195] generating shared ca certs ...
	I1013 14:24:22.078783 1828239 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:24:22.078965 1828239 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 14:24:22.079024 1828239 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 14:24:22.079031 1828239 certs.go:257] generating profile certs ...
	I1013 14:24:22.079117 1828239 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.key
	I1013 14:24:22.079173 1828239 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/apiserver.key.a6ce53b0
	I1013 14:24:22.079207 1828239 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/proxy-client.key
	I1013 14:24:22.079309 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 14:24:22.079333 1828239 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 14:24:22.079338 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 14:24:22.079366 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 14:24:22.079383 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 14:24:22.079411 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 14:24:22.079451 1828239 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 14:24:22.080151 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 14:24:22.118376 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 14:24:22.153310 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 14:24:22.188171 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 14:24:22.222901 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 14:24:22.264350 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 14:24:22.301536 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 14:24:22.335338 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 14:24:22.372046 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 14:24:22.406626 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 14:24:22.443161 1828239 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 14:24:22.477333 1828239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 14:24:22.501400 1828239 ssh_runner.go:195] Run: openssl version
	I1013 14:24:22.509276 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 14:24:22.524393 1828239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 14:24:22.530166 1828239 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 14:24:22.530225 1828239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 14:24:22.538397 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 14:24:22.553283 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 14:24:22.568964 1828239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:24:22.575348 1828239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:24:22.575413 1828239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 14:24:22.584465 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 14:24:22.598494 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 14:24:22.613759 1828239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 14:24:22.620572 1828239 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 14:24:22.620695 1828239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 14:24:22.629519 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
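The three `test -L || ln -fs` runs above are minikube installing each CA into the OpenSSL trust directory by subject hash: OpenSSL looks certificates up in `/etc/ssl/certs` via a `<subject-hash>.0` symlink, where the hash comes from `openssl x509 -hash -noout`. A minimal self-contained sketch of that pattern (throwaway cert and temp directory, not the paths from the log):

```shell
# Sketch of the hash-symlink step in the log above (illustrative paths only).
# OpenSSL resolves CAs in its certs dir by subject hash, so each PEM gets a
# <hash>.0 symlink pointing at it.
set -eu
workdir=$(mktemp -d)
# Generate a throwaway self-signed CA so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=exampleCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" -days 1 2>/dev/null
# Same hash computation the log shows: openssl x509 -hash -noout -in <cert>
hash=$(openssl x509 -hash -noout -in "$workdir/ca.pem")
# Same link pattern as the log: test -L <hash>.0 || ln -fs <cert> <hash>.0
test -L "$workdir/$hash.0" || ln -fs "$workdir/ca.pem" "$workdir/$hash.0"
ls -l "$workdir/$hash.0"
```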
	I1013 14:24:22.642781 1828239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 14:24:22.648644 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 14:24:22.656505 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 14:24:22.664947 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 14:24:22.673340 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 14:24:22.681663 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 14:24:22.690998 1828239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
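The run of `openssl x509 ... -checkend 86400` commands above is how minikube verifies each control-plane certificate will still be valid 24 hours from now: `-checkend N` exits 0 if the cert remains valid N seconds into the future and non-zero otherwise. A self-contained sketch (throwaway cert, not the `/var/lib/minikube/certs` files from the log):

```shell
# Sketch of the validity checks above: -checkend N exits 0 iff the cert is
# still valid N seconds from now.
set -eu
tmp=$(mktemp -d)
# Throwaway cert valid for 2 days, so a 24h check should pass.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/k.pem" -out "$tmp/c.pem" -days 2 2>/dev/null
# Same invocation shape as the log: exit 0 means "not expiring within 86400s".
openssl x509 -noout -in "$tmp/c.pem" -checkend 86400
echo "still valid for 24h"
```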
	I1013 14:24:22.699645 1828239 kubeadm.go:400] StartCluster: {Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountStri
ng: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:24:22.699792 1828239 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 14:24:22.699896 1828239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 14:24:22.742877 1828239 cri.go:89] found id: "ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158"
	I1013 14:24:22.742899 1828239 cri.go:89] found id: "20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18"
	I1013 14:24:22.742902 1828239 cri.go:89] found id: "3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9"
	I1013 14:24:22.742904 1828239 cri.go:89] found id: "0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116"
	I1013 14:24:22.742906 1828239 cri.go:89] found id: "242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46"
	I1013 14:24:22.742908 1828239 cri.go:89] found id: "72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde"
	I1013 14:24:22.742909 1828239 cri.go:89] found id: "2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e"
	I1013 14:24:22.742911 1828239 cri.go:89] found id: "5d494529ac46e168ef401a0f3abfc4c29823c7952c5d0d603191d632e9927969"
	I1013 14:24:22.742912 1828239 cri.go:89] found id: "65404fda21399f8e2093ac9e98d3bc62a59e8b65fa508fa8ff28590c208c9bb6"
	I1013 14:24:22.742919 1828239 cri.go:89] found id: "510429c5edae2af8f2b59fcab349d442fce243b6ae5c6b43fed60e140f637139"
	I1013 14:24:22.742921 1828239 cri.go:89] found id: "aa4c26afd56a1739b265e7591b41bd3eb7f30dd93e33ac950aa707edeeea83dc"
	I1013 14:24:22.742922 1828239 cri.go:89] found id: "e9ae5c7f05c42fd2e2d30e5c94a447197d7a3520bc4c9b4abbd8e2f332510087"
	I1013 14:24:22.742924 1828239 cri.go:89] found id: "02ef80a81d8fe25469e7142216e5e6e93881f43146502a5dc0f26db3256962e6"
	I1013 14:24:22.742926 1828239 cri.go:89] found id: "a04ad0a2c2f7af9a95868a524e710f1ad11465adb6aea5b7c740e033e87bdfcc"
	I1013 14:24:22.742929 1828239 cri.go:89] found id: ""
	I1013 14:24:22.742976 1828239 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1013 14:24:22.777266 1828239 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116","pid":3393,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116/rootfs","created":"2025-10-13T14:23:38.734715056Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d","io.kubernetes.cri.sandbox-name":"etcd-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2cb7eee2ec43f450df95fd4bf8a31b0e"},"owner":"root"},{"ociVersion":"1.2.1","id":"1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4
a4253736de60d630d","pid":1321,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d/rootfs","created":"2025-10-13T14:22:34.240041782Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-608191_2cb7eee2ec43f450df95fd4bf8a31b0e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2cb7eee2ec43f450df95fd4bf8a31b0e"},"owner":"root"},{"ociVersion":"
1.2.1","id":"20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18","pid":3763,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18/rootfs","created":"2025-10-13T14:23:52.016203262Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4aff0633e30644a590c2955148ff3f21"},"owner":"root"},{"ociVersion":"1.2.1","id":"242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46","pid":3221,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46/rootfs","created":"2025-10-13T14:23:31.767895132Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b","io.kubernetes.cri.sandbox-name":"kube-proxy-cd8b5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"68c1c059-66ad-4ca6-b600-fa382f929d3f"},"owner":"root"},{"ociVersion":"1.2.1","id":"2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e","pid":3089,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae38
4ab33a9e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e/rootfs","created":"2025-10-13T14:23:26.693968788Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"316b9a37-6b1a-4349-b8f1-641507a4c795"},"owner":"root"},{"ociVersion":"1.2.1","id":"31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3","pid":2283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/31e2b1fefe43df1713abcb0830c5336
8f55e44030d0a8f941c53fde8f1e709f3/rootfs","created":"2025-10-13T14:22:47.627980943Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_316b9a37-6b1a-4349-b8f1-641507a4c795","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"316b9a37-6b1a-4349-b8f1-641507a4c795"},"owner":"root"},{"ociVersion":"1.2.1","id":"3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9","pid":3576,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9","rootfs":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9/rootfs","created":"2025-10-13T14:23:48.796455766Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ef9f9755f56989d65b1d944c685f5df7"},"owner":"root"},{"ociVersion":"1.2.1","id":"661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120","pid":1300,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120/rootfs","c
reated":"2025-10-13T14:22:34.189772554Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-608191_520d8ef85f5d2147b077c0cca7804b20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"520d8ef85f5d2147b077c0cca7804b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208","pid":1307,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208","rootfs":"/run/containerd/io.containerd.
runtime.v2.task/k8s.io/67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208/rootfs","created":"2025-10-13T14:22:34.198903293Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-608191_ef9f9755f56989d65b1d944c685f5df7","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ef9f9755f56989d65b1d944c685f5df7"},"owner":"root"},{"ociVersion":"1.2.1","id":"72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde","pid":3203,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72508a89014167f9db6746deac
adcc39d3ca4514e93ad689f070711e8fae5dde","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde/rootfs","created":"2025-10-13T14:23:31.726713832Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-b59r9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48c31bd1-4a65-4823-87e4-e49318896f91"},"owner":"root"},{"ociVersion":"1.2.1","id":"79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92","pid":2034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79d79fc021a
2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92/rootfs","created":"2025-10-13T14:22:46.71188548Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-b59r9_48c31bd1-4a65-4823-87e4-e49318896f91","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-b59r9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"48c31bd1-4a65-4823-87e4-e49318896f91"},"owner":"root"},{"ociVersion":"1.2.1","id":"cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b","pid":1739,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df
29b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b/rootfs","created":"2025-10-13T14:22:45.841588268Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-cd8b5_68c1c059-66ad-4ca6-b600-fa382f929d3f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-cd8b5","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"68c1c059-66ad-4ca6-b600-fa382f929d3f"},"owner":"root"},{"ociVersion":"1.2.1","id":"ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158","pid":3776,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccd1d67
1f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158/rootfs","created":"2025-10-13T14:23:52.03080692Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"520d8ef85f5d2147b077c0cca7804b20"},"owner":"root"},{"ociVersion":"1.2.1","id":"d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01","pid":1265,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01/rootfs","created":"2025-10-13T14:22:34.170282347Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-608191_4aff0633e30644a590c2955148ff3f21","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-608191","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4aff0633e30644a590c2955148ff3f21"},"owner":"root"}]
	I1013 14:24:22.777502 1828239 cri.go:126] list returned 14 containers
	I1013 14:24:22.777511 1828239 cri.go:129] container: {ID:0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116 Status:running}
	I1013 14:24:22.777524 1828239 cri.go:135] skipping {0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116 running}: state = "running", want "paused"
	I1013 14:24:22.777531 1828239 cri.go:129] container: {ID:1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d Status:running}
	I1013 14:24:22.777537 1828239 cri.go:131] skipping 1136f8cb2bfdaa0453b7a8dbda93431ea5a1bbef26116f4a4253736de60d630d - not in ps
	I1013 14:24:22.777540 1828239 cri.go:129] container: {ID:20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18 Status:running}
	I1013 14:24:22.777543 1828239 cri.go:135] skipping {20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18 running}: state = "running", want "paused"
	I1013 14:24:22.777546 1828239 cri.go:129] container: {ID:242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46 Status:running}
	I1013 14:24:22.777548 1828239 cri.go:135] skipping {242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46 running}: state = "running", want "paused"
	I1013 14:24:22.777550 1828239 cri.go:129] container: {ID:2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e Status:running}
	I1013 14:24:22.777557 1828239 cri.go:135] skipping {2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e running}: state = "running", want "paused"
	I1013 14:24:22.777559 1828239 cri.go:129] container: {ID:31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3 Status:running}
	I1013 14:24:22.777566 1828239 cri.go:131] skipping 31e2b1fefe43df1713abcb0830c53368f55e44030d0a8f941c53fde8f1e709f3 - not in ps
	I1013 14:24:22.777568 1828239 cri.go:129] container: {ID:3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9 Status:running}
	I1013 14:24:22.777572 1828239 cri.go:135] skipping {3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9 running}: state = "running", want "paused"
	I1013 14:24:22.777574 1828239 cri.go:129] container: {ID:661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120 Status:running}
	I1013 14:24:22.777578 1828239 cri.go:131] skipping 661659159fd35d69f454d2aef90657716f18955de56af83f85316a76f6b27120 - not in ps
	I1013 14:24:22.777579 1828239 cri.go:129] container: {ID:67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208 Status:running}
	I1013 14:24:22.777581 1828239 cri.go:131] skipping 67cdee844fa0f0e822335617194f6baf4a5a4273b45b7759a11a28b656d3c208 - not in ps
	I1013 14:24:22.777584 1828239 cri.go:129] container: {ID:72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde Status:running}
	I1013 14:24:22.777587 1828239 cri.go:135] skipping {72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde running}: state = "running", want "paused"
	I1013 14:24:22.777591 1828239 cri.go:129] container: {ID:79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92 Status:running}
	I1013 14:24:22.777594 1828239 cri.go:131] skipping 79d79fc021a2c031449013f14c1eb47bd1bdcece7a4c1aac2ac07cca7151aa92 - not in ps
	I1013 14:24:22.777596 1828239 cri.go:129] container: {ID:cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b Status:running}
	I1013 14:24:22.777599 1828239 cri.go:131] skipping cccbb832d47ca2786a5ae7d2719aaa93d7702411df955564d81889f2284df29b - not in ps
	I1013 14:24:22.777601 1828239 cri.go:129] container: {ID:ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158 Status:running}
	I1013 14:24:22.777605 1828239 cri.go:135] skipping {ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158 running}: state = "running", want "paused"
	I1013 14:24:22.777608 1828239 cri.go:129] container: {ID:d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01 Status:running}
	I1013 14:24:22.777611 1828239 cri.go:131] skipping d8c82bf329c20e0c5e2cae1e88c546a691f5e971f7f347786b347277b6b0db01 - not in ps
	I1013 14:24:22.777658 1828239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 14:24:22.792222 1828239 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 14:24:22.792235 1828239 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 14:24:22.792289 1828239 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 14:24:22.807360 1828239 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:24:22.807880 1828239 kubeconfig.go:125] found "functional-608191" server: "https://192.168.39.10:8441"
	I1013 14:24:22.809265 1828239 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 14:24:22.825274 1828239 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
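The drift detection above hinges on `diff -u` exit status: 0 when the deployed `kubeadm.yaml` matches the freshly rendered `kubeadm.yaml.new`, 1 when they differ, in which case minikube reconfigures the cluster from the new file. A minimal sketch of that check using stand-in files (the single-line contents are illustrative, not the full kubeadm config):

```shell
# Sketch of the config-drift check in the log above: diff the deployed config
# against the newly rendered one and treat a nonzero exit as "drift".
set -u
d=$(mktemp -d)
printf 'value: "NamespaceLifecycle"\n'     > "$d/kubeadm.yaml"
printf 'value: "NamespaceAutoProvision"\n' > "$d/kubeadm.yaml.new"
# diff -u exits 1 when the files differ; capture the hunk for logging,
# just as minikube embeds it in the "detected kubeadm config drift" message.
if ! diff -u "$d/kubeadm.yaml" "$d/kubeadm.yaml.new" > "$d/drift.txt"; then
  echo "drift detected"
fi
```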
	I1013 14:24:22.825289 1828239 kubeadm.go:1160] stopping kube-system containers ...
	I1013 14:24:22.825304 1828239 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 14:24:22.825371 1828239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 14:24:22.874092 1828239 cri.go:89] found id: "ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158"
	I1013 14:24:22.874111 1828239 cri.go:89] found id: "20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18"
	I1013 14:24:22.874114 1828239 cri.go:89] found id: "3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9"
	I1013 14:24:22.874118 1828239 cri.go:89] found id: "0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116"
	I1013 14:24:22.874121 1828239 cri.go:89] found id: "242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46"
	I1013 14:24:22.874124 1828239 cri.go:89] found id: "72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde"
	I1013 14:24:22.874126 1828239 cri.go:89] found id: "2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e"
	I1013 14:24:22.874128 1828239 cri.go:89] found id: "5d494529ac46e168ef401a0f3abfc4c29823c7952c5d0d603191d632e9927969"
	I1013 14:24:22.874129 1828239 cri.go:89] found id: "65404fda21399f8e2093ac9e98d3bc62a59e8b65fa508fa8ff28590c208c9bb6"
	I1013 14:24:22.874135 1828239 cri.go:89] found id: "510429c5edae2af8f2b59fcab349d442fce243b6ae5c6b43fed60e140f637139"
	I1013 14:24:22.874137 1828239 cri.go:89] found id: "aa4c26afd56a1739b265e7591b41bd3eb7f30dd93e33ac950aa707edeeea83dc"
	I1013 14:24:22.874138 1828239 cri.go:89] found id: "e9ae5c7f05c42fd2e2d30e5c94a447197d7a3520bc4c9b4abbd8e2f332510087"
	I1013 14:24:22.874140 1828239 cri.go:89] found id: "02ef80a81d8fe25469e7142216e5e6e93881f43146502a5dc0f26db3256962e6"
	I1013 14:24:22.874142 1828239 cri.go:89] found id: "a04ad0a2c2f7af9a95868a524e710f1ad11465adb6aea5b7c740e033e87bdfcc"
	I1013 14:24:22.874143 1828239 cri.go:89] found id: ""
	I1013 14:24:22.874148 1828239 cri.go:252] Stopping containers: [ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158 20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18 3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9 0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116 242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46 72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde 2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e 5d494529ac46e168ef401a0f3abfc4c29823c7952c5d0d603191d632e9927969 65404fda21399f8e2093ac9e98d3bc62a59e8b65fa508fa8ff28590c208c9bb6 510429c5edae2af8f2b59fcab349d442fce243b6ae5c6b43fed60e140f637139 aa4c26afd56a1739b265e7591b41bd3eb7f30dd93e33ac950aa707edeeea83dc e9ae5c7f05c42fd2e2d30e5c94a447197d7a3520bc4c9b4abbd8e2f332510087 02ef80a81d8fe25469e7142216e5e6e93881f43146502a5dc0f26db3256962e6 a04ad0a2c2f7af9a95868a524e710f1ad11465adb6aea5b7c740e033e87bdfcc]
	I1013 14:24:22.874233 1828239 ssh_runner.go:195] Run: which crictl
	I1013 14:24:22.879458 1828239 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158 20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18 3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9 0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116 242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46 72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde 2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e 5d494529ac46e168ef401a0f3abfc4c29823c7952c5d0d603191d632e9927969 65404fda21399f8e2093ac9e98d3bc62a59e8b65fa508fa8ff28590c208c9bb6 510429c5edae2af8f2b59fcab349d442fce243b6ae5c6b43fed60e140f637139 aa4c26afd56a1739b265e7591b41bd3eb7f30dd93e33ac950aa707edeeea83dc e9ae5c7f05c42fd2e2d30e5c94a447197d7a3520bc4c9b4abbd8e2f332510087 02ef80a81d8fe25469e7142216e5e6e93881f43146502a5dc0f26db3256962e6 a04ad0a2c2f7af9a95868a524e710f1ad11465adb6aea5b7c740e033e87bdfcc
	I1013 14:24:38.430128 1828239 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158 20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18 3756c5bf3594878b43508b81ed432be498abc5513e8de09a43db7a92ba375cc9 0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116 242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46 72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde 2fcc8135e9aa48e7874d35eebf3b8af484a70f5170287870ebcdae384ab33a9e 5d494529ac46e168ef401a0f3abfc4c29823c7952c5d0d603191d632e9927969 65404fda21399f8e2093ac9e98d3bc62a59e8b65fa508fa8ff28590c208c9bb6 510429c5edae2af8f2b59fcab349d442fce243b6ae5c6b43fed60e140f637139 aa4c26afd56a1739b265e7591b41bd3eb7f30dd93e33ac950aa707edeeea83dc e9ae5c7f05c42fd2e2d30e5c94a447197d7a3520bc4c9b4abbd8e2f332510087 02ef80a81d8fe25469e7142216e5e6e93881f43146502a5dc0f26db3256962e6 a04ad0a2c2f7af9a95868a524e710f1ad11465adb6aea5b7c740e033e87bdfcc: (15.550618468s)
	I1013 14:24:38.430234 1828239 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 14:24:38.462019 1828239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 14:24:38.476130 1828239 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 13 14:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5637 Oct 13 14:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5673 Oct 13 14:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5585 Oct 13 14:23 /etc/kubernetes/scheduler.conf
	
	I1013 14:24:38.476210 1828239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1013 14:24:38.488525 1828239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1013 14:24:38.501614 1828239 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:24:38.501672 1828239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 14:24:38.515051 1828239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1013 14:24:38.527130 1828239 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:24:38.527191 1828239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 14:24:38.540174 1828239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1013 14:24:38.552016 1828239 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:24:38.552088 1828239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 14:24:38.564953 1828239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 14:24:38.578516 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:38.638833 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:39.453420 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:39.740848 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:39.819858 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:39.970626 1828239 api_server.go:52] waiting for apiserver process to appear ...
	I1013 14:24:39.970730 1828239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:24:40.004508 1828239 api_server.go:72] duration metric: took 33.902893ms to wait for apiserver process to appear ...
	I1013 14:24:40.004526 1828239 api_server.go:88] waiting for apiserver healthz status ...
	I1013 14:24:40.004551 1828239 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8441/healthz ...
	I1013 14:24:40.014345 1828239 api_server.go:279] https://192.168.39.10:8441/healthz returned 200:
	ok
	I1013 14:24:40.022483 1828239 api_server.go:141] control plane version: v1.34.1
	I1013 14:24:40.022504 1828239 api_server.go:131] duration metric: took 17.972581ms to wait for apiserver health ...
	I1013 14:24:40.022514 1828239 cni.go:84] Creating CNI manager for ""
	I1013 14:24:40.022520 1828239 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:24:40.024612 1828239 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 14:24:40.026198 1828239 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 14:24:40.073395 1828239 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 14:24:40.107885 1828239 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 14:24:40.112972 1828239 system_pods.go:59] 7 kube-system pods found
	I1013 14:24:40.113003 1828239 system_pods.go:61] "coredns-66bc5c9577-b59r9" [48c31bd1-4a65-4823-87e4-e49318896f91] Running
	I1013 14:24:40.113012 1828239 system_pods.go:61] "etcd-functional-608191" [6059102e-f5ef-478c-9d84-edf86b13d709] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 14:24:40.113018 1828239 system_pods.go:61] "kube-apiserver-functional-608191" [c62a234d-74f3-4c46-9d86-74cfbeea48c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 14:24:40.113027 1828239 system_pods.go:61] "kube-controller-manager-functional-608191" [49281ec6-c8e9-4978-9541-dd30106882ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 14:24:40.113031 1828239 system_pods.go:61] "kube-proxy-cd8b5" [68c1c059-66ad-4ca6-b600-fa382f929d3f] Running
	I1013 14:24:40.113035 1828239 system_pods.go:61] "kube-scheduler-functional-608191" [9e50261e-67ee-4dd5-9718-0b8485bfd319] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 14:24:40.113039 1828239 system_pods.go:61] "storage-provisioner" [316b9a37-6b1a-4349-b8f1-641507a4c795] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 14:24:40.113044 1828239 system_pods.go:74] duration metric: took 5.147016ms to wait for pod list to return data ...
	I1013 14:24:40.113052 1828239 node_conditions.go:102] verifying NodePressure condition ...
	I1013 14:24:40.116814 1828239 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 14:24:40.116831 1828239 node_conditions.go:123] node cpu capacity is 2
	I1013 14:24:40.116843 1828239 node_conditions.go:105] duration metric: took 3.787443ms to run NodePressure ...
	I1013 14:24:40.116895 1828239 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 14:24:40.404220 1828239 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 14:24:40.415465 1828239 kubeadm.go:743] kubelet initialised
	I1013 14:24:40.415481 1828239 kubeadm.go:744] duration metric: took 11.241157ms waiting for restarted kubelet to initialise ...
	I1013 14:24:40.415502 1828239 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 14:24:40.442654 1828239 ops.go:34] apiserver oom_adj: -16
	I1013 14:24:40.442669 1828239 kubeadm.go:601] duration metric: took 17.650428922s to restartPrimaryControlPlane
	I1013 14:24:40.442679 1828239 kubeadm.go:402] duration metric: took 17.743048865s to StartCluster
	I1013 14:24:40.442703 1828239 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:24:40.442799 1828239 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:24:40.443419 1828239 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 14:24:40.443664 1828239 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 14:24:40.443751 1828239 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 14:24:40.443866 1828239 addons.go:69] Setting storage-provisioner=true in profile "functional-608191"
	I1013 14:24:40.443894 1828239 addons.go:238] Setting addon storage-provisioner=true in "functional-608191"
	W1013 14:24:40.443902 1828239 addons.go:247] addon storage-provisioner should already be in state true
	I1013 14:24:40.443897 1828239 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:24:40.443936 1828239 host.go:66] Checking if "functional-608191" exists ...
	I1013 14:24:40.443942 1828239 addons.go:69] Setting default-storageclass=true in profile "functional-608191"
	I1013 14:24:40.443971 1828239 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-608191"
	I1013 14:24:40.444322 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:40.444366 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:40.444499 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:40.444542 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:40.445580 1828239 out.go:179] * Verifying Kubernetes components...
	I1013 14:24:40.447068 1828239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 14:24:40.459760 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1013 14:24:40.460288 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:40.460342 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I1013 14:24:40.460852 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:40.460864 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:40.460909 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:40.461294 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:40.461411 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:40.461432 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:40.461839 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:40.461890 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:40.461939 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:40.462082 1828239 main.go:141] libmachine: (functional-608191) Calling .GetState
	I1013 14:24:40.465572 1828239 addons.go:238] Setting addon default-storageclass=true in "functional-608191"
	W1013 14:24:40.465585 1828239 addons.go:247] addon default-storageclass should already be in state true
	I1013 14:24:40.465615 1828239 host.go:66] Checking if "functional-608191" exists ...
	I1013 14:24:40.466100 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:40.466146 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:40.478254 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I1013 14:24:40.478804 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:40.479375 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:40.479400 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:40.479872 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:40.480148 1828239 main.go:141] libmachine: (functional-608191) Calling .GetState
	I1013 14:24:40.480701 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1013 14:24:40.481133 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:40.481598 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:40.481619 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:40.482051 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:40.482541 1828239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:24:40.482577 1828239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:24:40.482605 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:40.484154 1828239 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 14:24:40.485428 1828239 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 14:24:40.485438 1828239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 14:24:40.485457 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:40.489940 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:40.490613 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:40.490634 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:40.490923 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:40.491175 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:40.491368 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:40.491540 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:40.498086 1828239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I1013 14:24:40.498616 1828239 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:24:40.499168 1828239 main.go:141] libmachine: Using API Version  1
	I1013 14:24:40.499207 1828239 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:24:40.499645 1828239 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:24:40.499891 1828239 main.go:141] libmachine: (functional-608191) Calling .GetState
	I1013 14:24:40.502168 1828239 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:24:40.502421 1828239 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 14:24:40.502437 1828239 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 14:24:40.502457 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
	I1013 14:24:40.505951 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:40.506561 1828239 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
	I1013 14:24:40.506586 1828239 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
	I1013 14:24:40.506834 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
	I1013 14:24:40.507066 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
	I1013 14:24:40.507305 1828239 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
	I1013 14:24:40.507515 1828239 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
	I1013 14:24:40.748509 1828239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 14:24:40.810366 1828239 node_ready.go:35] waiting up to 6m0s for node "functional-608191" to be "Ready" ...
	I1013 14:24:40.824754 1828239 node_ready.go:49] node "functional-608191" is "Ready"
	I1013 14:24:40.824789 1828239 node_ready.go:38] duration metric: took 14.35005ms for node "functional-608191" to be "Ready" ...
	I1013 14:24:40.824810 1828239 api_server.go:52] waiting for apiserver process to appear ...
	I1013 14:24:40.824883 1828239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:24:40.913262 1828239 api_server.go:72] duration metric: took 469.560161ms to wait for apiserver process to appear ...
	I1013 14:24:40.913285 1828239 api_server.go:88] waiting for apiserver healthz status ...
	I1013 14:24:40.913310 1828239 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8441/healthz ...
	I1013 14:24:40.944575 1828239 api_server.go:279] https://192.168.39.10:8441/healthz returned 200:
	ok
	I1013 14:24:40.948929 1828239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 14:24:40.971124 1828239 api_server.go:141] control plane version: v1.34.1
	I1013 14:24:40.971147 1828239 api_server.go:131] duration metric: took 57.856105ms to wait for apiserver health ...
	I1013 14:24:40.971156 1828239 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 14:24:41.007287 1828239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 14:24:41.055164 1828239 system_pods.go:59] 7 kube-system pods found
	I1013 14:24:41.055185 1828239 system_pods.go:61] "coredns-66bc5c9577-b59r9" [48c31bd1-4a65-4823-87e4-e49318896f91] Running
	I1013 14:24:41.055192 1828239 system_pods.go:61] "etcd-functional-608191" [6059102e-f5ef-478c-9d84-edf86b13d709] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 14:24:41.055200 1828239 system_pods.go:61] "kube-apiserver-functional-608191" [c62a234d-74f3-4c46-9d86-74cfbeea48c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 14:24:41.055205 1828239 system_pods.go:61] "kube-controller-manager-functional-608191" [49281ec6-c8e9-4978-9541-dd30106882ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 14:24:41.055211 1828239 system_pods.go:61] "kube-proxy-cd8b5" [68c1c059-66ad-4ca6-b600-fa382f929d3f] Running
	I1013 14:24:41.055216 1828239 system_pods.go:61] "kube-scheduler-functional-608191" [9e50261e-67ee-4dd5-9718-0b8485bfd319] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 14:24:41.055219 1828239 system_pods.go:61] "storage-provisioner" [316b9a37-6b1a-4349-b8f1-641507a4c795] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 14:24:41.055224 1828239 system_pods.go:74] duration metric: took 84.06469ms to wait for pod list to return data ...
	I1013 14:24:41.055233 1828239 default_sa.go:34] waiting for default service account to be created ...
	I1013 14:24:41.078587 1828239 default_sa.go:45] found service account: "default"
	I1013 14:24:41.078608 1828239 default_sa.go:55] duration metric: took 23.36803ms for default service account to be created ...
	I1013 14:24:41.078619 1828239 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 14:24:41.109473 1828239 system_pods.go:86] 7 kube-system pods found
	I1013 14:24:41.109496 1828239 system_pods.go:89] "coredns-66bc5c9577-b59r9" [48c31bd1-4a65-4823-87e4-e49318896f91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 14:24:41.109502 1828239 system_pods.go:89] "etcd-functional-608191" [6059102e-f5ef-478c-9d84-edf86b13d709] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 14:24:41.109509 1828239 system_pods.go:89] "kube-apiserver-functional-608191" [c62a234d-74f3-4c46-9d86-74cfbeea48c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 14:24:41.109514 1828239 system_pods.go:89] "kube-controller-manager-functional-608191" [49281ec6-c8e9-4978-9541-dd30106882ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 14:24:41.109516 1828239 system_pods.go:89] "kube-proxy-cd8b5" [68c1c059-66ad-4ca6-b600-fa382f929d3f] Running
	I1013 14:24:41.109523 1828239 system_pods.go:89] "kube-scheduler-functional-608191" [9e50261e-67ee-4dd5-9718-0b8485bfd319] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 14:24:41.109527 1828239 system_pods.go:89] "storage-provisioner" [316b9a37-6b1a-4349-b8f1-641507a4c795] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 14:24:41.109534 1828239 system_pods.go:126] duration metric: took 30.909547ms to wait for k8s-apps to be running ...
	I1013 14:24:41.109542 1828239 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 14:24:41.109593 1828239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:24:45.200918 1828239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.251962343s)
	I1013 14:24:45.200975 1828239 main.go:141] libmachine: Making call to close driver server
	I1013 14:24:45.200986 1828239 main.go:141] libmachine: (functional-608191) Calling .Close
	I1013 14:24:45.200993 1828239 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.193680182s)
	I1013 14:24:45.201025 1828239 main.go:141] libmachine: Making call to close driver server
	I1013 14:24:45.201031 1828239 main.go:141] libmachine: (functional-608191) Calling .Close
	I1013 14:24:45.201075 1828239 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.091467083s)
	I1013 14:24:45.201089 1828239 system_svc.go:56] duration metric: took 4.091543765s WaitForService to wait for kubelet
	I1013 14:24:45.201097 1828239 kubeadm.go:586] duration metric: took 4.757404661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 14:24:45.201118 1828239 node_conditions.go:102] verifying NodePressure condition ...
	I1013 14:24:45.201307 1828239 main.go:141] libmachine: Successfully made call to close driver server
	I1013 14:24:45.201314 1828239 main.go:141] libmachine: Successfully made call to close driver server
	I1013 14:24:45.201330 1828239 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 14:24:45.201342 1828239 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
	I1013 14:24:45.201344 1828239 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 14:24:45.201349 1828239 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
	I1013 14:24:45.201351 1828239 main.go:141] libmachine: Making call to close driver server
	I1013 14:24:45.201359 1828239 main.go:141] libmachine: (functional-608191) Calling .Close
	I1013 14:24:45.201406 1828239 main.go:141] libmachine: Making call to close driver server
	I1013 14:24:45.201411 1828239 main.go:141] libmachine: (functional-608191) Calling .Close
	I1013 14:24:45.201637 1828239 main.go:141] libmachine: Successfully made call to close driver server
	I1013 14:24:45.201651 1828239 main.go:141] libmachine: Successfully made call to close driver server
	I1013 14:24:45.201659 1828239 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 14:24:45.201659 1828239 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 14:24:45.212852 1828239 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 14:24:45.212869 1828239 node_conditions.go:123] node cpu capacity is 2
	I1013 14:24:45.212881 1828239 node_conditions.go:105] duration metric: took 11.75898ms to run NodePressure ...
	I1013 14:24:45.212894 1828239 start.go:241] waiting for startup goroutines ...
	I1013 14:24:45.215699 1828239 main.go:141] libmachine: Making call to close driver server
	I1013 14:24:45.215730 1828239 main.go:141] libmachine: (functional-608191) Calling .Close
	I1013 14:24:45.216069 1828239 main.go:141] libmachine: Successfully made call to close driver server
	I1013 14:24:45.216081 1828239 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 14:24:45.216105 1828239 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
	I1013 14:24:45.218107 1828239 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 14:24:45.219608 1828239 addons.go:514] duration metric: took 4.775865611s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 14:24:45.219645 1828239 start.go:246] waiting for cluster config update ...
	I1013 14:24:45.219660 1828239 start.go:255] writing updated cluster config ...
	I1013 14:24:45.220054 1828239 ssh_runner.go:195] Run: rm -f paused
	I1013 14:24:45.226588 1828239 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:24:45.231003 1828239 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b59r9" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 14:24:47.244546 1828239 pod_ready.go:104] pod "coredns-66bc5c9577-b59r9" is not "Ready", error: <nil>
	W1013 14:24:49.737206 1828239 pod_ready.go:104] pod "coredns-66bc5c9577-b59r9" is not "Ready", error: <nil>
	I1013 14:24:51.237960 1828239 pod_ready.go:94] pod "coredns-66bc5c9577-b59r9" is "Ready"
	I1013 14:24:51.237999 1828239 pod_ready.go:86] duration metric: took 6.006974101s for pod "coredns-66bc5c9577-b59r9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:51.240926 1828239 pod_ready.go:83] waiting for pod "etcd-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:51.246414 1828239 pod_ready.go:94] pod "etcd-functional-608191" is "Ready"
	I1013 14:24:51.246430 1828239 pod_ready.go:86] duration metric: took 5.489419ms for pod "etcd-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:51.248821 1828239 pod_ready.go:83] waiting for pod "kube-apiserver-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 14:24:53.255394 1828239 pod_ready.go:104] pod "kube-apiserver-functional-608191" is not "Ready", error: <nil>
	W1013 14:24:55.255873 1828239 pod_ready.go:104] pod "kube-apiserver-functional-608191" is not "Ready", error: <nil>
	I1013 14:24:57.755352 1828239 pod_ready.go:94] pod "kube-apiserver-functional-608191" is "Ready"
	I1013 14:24:57.755369 1828239 pod_ready.go:86] duration metric: took 6.506535242s for pod "kube-apiserver-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:57.757948 1828239 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.763844 1828239 pod_ready.go:94] pod "kube-controller-manager-functional-608191" is "Ready"
	I1013 14:24:58.763862 1828239 pod_ready.go:86] duration metric: took 1.005895758s for pod "kube-controller-manager-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.765856 1828239 pod_ready.go:83] waiting for pod "kube-proxy-cd8b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.771157 1828239 pod_ready.go:94] pod "kube-proxy-cd8b5" is "Ready"
	I1013 14:24:58.771172 1828239 pod_ready.go:86] duration metric: took 5.300891ms for pod "kube-proxy-cd8b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.773383 1828239 pod_ready.go:83] waiting for pod "kube-scheduler-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.777872 1828239 pod_ready.go:94] pod "kube-scheduler-functional-608191" is "Ready"
	I1013 14:24:58.777886 1828239 pod_ready.go:86] duration metric: took 4.491319ms for pod "kube-scheduler-functional-608191" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 14:24:58.777894 1828239 pod_ready.go:40] duration metric: took 13.551278865s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 14:24:58.825489 1828239 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 14:24:58.827526 1828239 out.go:179] * Done! kubectl is now configured to use "functional-608191" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b3815b3d85db       56cc512116c8f       6 minutes ago       Exited              mount-munger              0                   e1b3239f98d8c       busybox-mount
	54e018365168b       c3994bc696102       6 minutes ago       Running             kube-apiserver            1                   ae5cac3c5f135       kube-apiserver-functional-608191
	73c62ac23dcef       52546a367cc9e       6 minutes ago       Running             coredns                   2                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	0bdcff79b6f2e       6e38f40d628db       6 minutes ago       Running             storage-provisioner       3                   31e2b1fefe43d       storage-provisioner
	9923b9c3b6134       c3994bc696102       6 minutes ago       Exited              kube-apiserver            0                   ae5cac3c5f135       kube-apiserver-functional-608191
	e3f11c67de677       c80c8dbafe7dd       6 minutes ago       Running             kube-controller-manager   2                   661659159fd35       kube-controller-manager-functional-608191
	552b6794b2ecf       7dd6aaa1717ab       6 minutes ago       Running             kube-scheduler            2                   d8c82bf329c20       kube-scheduler-functional-608191
	19906e68c850c       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       2                   31e2b1fefe43d       storage-provisioner
	b3d48b09ac4ab       fc25172553d79       6 minutes ago       Running             kube-proxy                2                   cccbb832d47ca       kube-proxy-cd8b5
	c9db6877437dc       5f1f5298c888d       6 minutes ago       Running             etcd                      2                   1136f8cb2bfda       etcd-functional-608191
	ccd1d671f4ad2       c80c8dbafe7dd       7 minutes ago       Exited              kube-controller-manager   1                   661659159fd35       kube-controller-manager-functional-608191
	20139c80c2b89       7dd6aaa1717ab       7 minutes ago       Exited              kube-scheduler            1                   d8c82bf329c20       kube-scheduler-functional-608191
	0ff2c0af6db42       5f1f5298c888d       7 minutes ago       Exited              etcd                      1                   1136f8cb2bfda       etcd-functional-608191
	242b510b56dc9       fc25172553d79       7 minutes ago       Exited              kube-proxy                1                   cccbb832d47ca       kube-proxy-cd8b5
	72508a8901416       52546a367cc9e       7 minutes ago       Exited              coredns                   1                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	
	
	==> containerd <==
	Oct 13 14:28:21 functional-608191 containerd[4454]: time="2025-10-13T14:28:21.929764676Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 13 14:28:21 functional-608191 containerd[4454]: time="2025-10-13T14:28:21.934666151Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:28:22 functional-608191 containerd[4454]: time="2025-10-13T14:28:22.010356918Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:28:22 functional-608191 containerd[4454]: time="2025-10-13T14:28:22.107013790Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:28:22 functional-608191 containerd[4454]: time="2025-10-13T14:28:22.107132980Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
	Oct 13 14:30:45 functional-608191 containerd[4454]: time="2025-10-13T14:30:45.932895822Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Oct 13 14:30:45 functional-608191 containerd[4454]: time="2025-10-13T14:30:45.936195556Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:46 functional-608191 containerd[4454]: time="2025-10-13T14:30:46.039429277Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:46 functional-608191 containerd[4454]: time="2025-10-13T14:30:46.138786342Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:30:46 functional-608191 containerd[4454]: time="2025-10-13T14:30:46.138839052Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Oct 13 14:30:48 functional-608191 containerd[4454]: time="2025-10-13T14:30:48.929085148Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 13 14:30:48 functional-608191 containerd[4454]: time="2025-10-13T14:30:48.932918783Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:48 functional-608191 containerd[4454]: time="2025-10-13T14:30:48.998895834Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:49 functional-608191 containerd[4454]: time="2025-10-13T14:30:49.105029984Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:30:49 functional-608191 containerd[4454]: time="2025-10-13T14:30:49.105118672Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 13 14:30:51 functional-608191 containerd[4454]: time="2025-10-13T14:30:51.929735421Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 13 14:30:51 functional-608191 containerd[4454]: time="2025-10-13T14:30:51.933139862Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:52 functional-608191 containerd[4454]: time="2025-10-13T14:30:52.000058628Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:30:52 functional-608191 containerd[4454]: time="2025-10-13T14:30:52.103995322Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:30:52 functional-608191 containerd[4454]: time="2025-10-13T14:30:52.104016622Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 13 14:31:09 functional-608191 containerd[4454]: time="2025-10-13T14:31:09.929417976Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 13 14:31:09 functional-608191 containerd[4454]: time="2025-10-13T14:31:09.932826552Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:31:09 functional-608191 containerd[4454]: time="2025-10-13T14:31:09.998285620Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:31:10 functional-608191 containerd[4454]: time="2025-10-13T14:31:10.094451955Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:31:10 functional-608191 containerd[4454]: time="2025-10-13T14:31:10.094635604Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	
	
	==> coredns [72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36858 - 65360 "HINFO IN 3005092589584362483.1560966083017627098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026785639s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73c62ac23dcef061db1a2cf49c532093463ee196addc24e97307ab20dcf5aeec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35999 - 64742 "HINFO IN 8601583101275943645.7322847173454900088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031744201s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> describe nodes <==
	Name:               functional-608191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-608191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=functional-608191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T14_22_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 14:22:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-608191
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:31:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    functional-608191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3422538a8174bd0af79b99fa0817bbd
	  System UUID:                f3422538-a817-4bd0-af79-b99fa0817bbd
	  Boot ID:                    fe252248-25b4-47d2-aaf1-51a9660115e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7d8vj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  default                     hello-node-connect-7d85dfc575-6qw7q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-bpcvp                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m9s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-b59r9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m30s
	  kube-system                 etcd-functional-608191                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m35s
	  kube-system                 kube-apiserver-functional-608191             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-functional-608191    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-cd8b5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-functional-608191             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m28s                  kube-proxy       
	  Normal  Starting                 6m38s                  kube-proxy       
	  Normal  Starting                 7m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m34s                  kubelet          Node functional-608191 status is now: NodeReady
	  Normal  RegisteredNode           8m31s                  node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m25s)  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m25s)  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m25s (x7 over 7m25s)  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m20s                  node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s                  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s                  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s                  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m27s                  node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	
	
	==> dmesg <==
	[  +0.007712] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.179092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109826] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.093375] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.131370] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.239173] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.906606] kauditd_printk_skb: 283 callbacks suppressed
	[Oct13 14:23] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.987192] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.060246] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.772942] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.876433] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.906041] kauditd_printk_skb: 66 callbacks suppressed
	[Oct13 14:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.122131] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.245935] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.172113] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.058295] kauditd_printk_skb: 143 callbacks suppressed
	[Oct13 14:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.013503] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.195165] kauditd_printk_skb: 129 callbacks suppressed
	[  +9.784794] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116] <==
	{"level":"warn","ts":"2025-10-13T14:23:50.478642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.507645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.509654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.535663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.545046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.565385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.653235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:24:33.216994Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T14:24:33.217137Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"error","ts":"2025-10-13T14:24:33.217254Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T14:24:33.219298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.219358Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2025-10-13T14:24:33.219480Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T14:24:33.219512Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T14:24:33.219213Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220399Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220436Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220027Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220466Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224162Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"error","ts":"2025-10-13T14:24:33.224284Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224309Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-10-13T14:24:33.224316Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [c9db6877437dc31eee9418cd82cb8418bccd7b125cd05fa5d3cb86774972e283] <==
	{"level":"warn","ts":"2025-10-13T14:24:43.712361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.724439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.735146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.755356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.766429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.779720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.797671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.808981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.823706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.834745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.849532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.864251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.890234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.903686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.914674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.934259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.947959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.965331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.980932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.008421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.020181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.034953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.058431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.158722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:31:15 up 9 min,  0 users,  load average: 0.47, 0.34, 0.24
	Linux functional-608191 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54e018365168b8ec6573769c8afa96e9b89eb529f2d32db595e00c0895ec563b] <==
	I1013 14:24:44.940705       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 14:24:44.941220       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 14:24:44.952886       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 14:24:44.953319       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 14:24:44.954494       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 14:24:44.955031       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 14:24:44.955259       1 aggregator.go:171] initial CRD sync complete...
	I1013 14:24:44.955267       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 14:24:44.955275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 14:24:44.955279       1 cache.go:39] Caches are synced for autoregister controller
	I1013 14:24:44.962085       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 14:24:44.973348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 14:24:44.983002       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 14:24:44.983039       1 policy_source.go:240] refreshing policies
	I1013 14:24:45.050478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 14:24:45.734011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 14:24:46.854394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 14:24:48.297742       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 14:24:48.389233       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 14:24:48.547871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 14:24:48.606961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 14:25:02.306998       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.126.250"}
	I1013 14:25:06.844289       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.60.71"}
	I1013 14:25:08.658755       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.255.215"}
	I1013 14:25:24.277694       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.51.235"}
	
	
	==> kube-apiserver [9923b9c3b6134565e2005a755337ee1e6d742736c6e3c9f98efee81bd4d5802c] <==
	I1013 14:24:41.642829       1 options.go:263] external host was not specified, using 192.168.39.10
	I1013 14:24:41.668518       1 server.go:150] Version: v1.34.1
	I1013 14:24:41.668782       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1013 14:24:41.675050       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158] <==
	I1013 14:23:55.330332       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 14:23:55.332979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:23:55.334816       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 14:23:55.338137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 14:23:55.340515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 14:23:55.341252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 14:23:55.342131       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:23:55.342984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 14:23:55.343841       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:23:55.345916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 14:23:55.345995       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 14:23:55.347514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:23:55.351800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 14:23:55.354174       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 14:23:55.354237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 14:23:55.365427       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 14:23:55.365646       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 14:23:55.365690       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:23:55.366053       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 14:23:55.367408       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:23:55.368689       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:23:55.368714       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 14:23:55.369361       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 14:23:55.369864       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:23:55.370627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [e3f11c67de677fc441824afcbe3a763614b71997830a304ba906478e55265073] <==
	I1013 14:24:48.244051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 14:24:48.244204       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 14:24:48.244243       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 14:24:48.244322       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 14:24:48.246286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 14:24:48.247607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:24:48.247635       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 14:24:48.247643       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 14:24:48.251970       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 14:24:48.257247       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 14:24:48.264697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 14:24:48.268125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:24:48.270387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:24:48.277540       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 14:24:48.281492       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:24:48.281911       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:24:48.282455       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:24:48.283532       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 14:24:48.285291       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 14:24:48.285421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:24:48.286191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 14:24:48.286359       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 14:24:48.286219       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:24:48.287912       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:24:48.298773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46] <==
	I1013 14:23:31.892503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:23:31.993145       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:23:31.993192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:23:31.993261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:23:32.032777       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:23:32.032888       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:23:32.032925       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:23:32.044710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:23:32.045212       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:23:32.045242       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:32.050189       1 config.go:200] "Starting service config controller"
	I1013 14:23:32.050219       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:23:32.050262       1 config.go:309] "Starting node config controller"
	I1013 14:23:32.050283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:23:32.050289       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:23:32.050702       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:23:32.050711       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:23:32.050725       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:23:32.050728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:23:32.151068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 14:23:32.151213       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:23:32.152939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b3d48b09ac4ab7f97ae8dd7256135561a415508f359989ac4035b756c0b49b56] <==
	I1013 14:24:34.497361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:24:36.901830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:24:36.901992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:24:36.902089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:24:36.962936       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:24:36.963219       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:24:36.963260       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:24:36.979965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:24:36.982117       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:24:36.982140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:36.987101       1 config.go:200] "Starting service config controller"
	I1013 14:24:36.987189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:24:36.987210       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:24:36.987213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:24:36.987227       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:24:36.987230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:24:36.989952       1 config.go:309] "Starting node config controller"
	I1013 14:24:36.989984       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:24:36.989991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:24:37.087813       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:24:37.087864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 14:24:37.087892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18] <==
	I1013 14:23:52.516984       1 serving.go:386] Generated self-signed cert in-memory
	I1013 14:23:53.392891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:23:53.393645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:53.416434       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 14:23:53.416479       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.416526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416539       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.416626       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.426367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:23:53.427869       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:23:53.517412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.517510       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.522735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.014501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 14:24:23.014800       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 14:24:23.014930       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 14:24:23.015015       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.015038       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1013 14:24:23.015060       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:24:23.016307       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 14:24:23.016453       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [552b6794b2ecff0f2c2558459d0aa52965219db398dc9269aade313c2bb7c25e] <==
	I1013 14:24:42.686856       1 serving.go:386] Generated self-signed cert in-memory
	W1013 14:24:44.871016       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 14:24:44.871060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 14:24:44.871069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 14:24:44.871075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 14:24:44.971082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:24:44.973132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:44.980825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.980854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.981656       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:24:44.981718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:24:45.083704       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:30:36 functional-608191 kubelet[5339]: E1013 14:30:36.928109    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:30:36 functional-608191 kubelet[5339]: E1013 14:30:36.928330    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:30:42 functional-608191 kubelet[5339]: E1013 14:30:42.928973    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:30:46 functional-608191 kubelet[5339]: E1013 14:30:46.139230    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 13 14:30:46 functional-608191 kubelet[5339]: E1013 14:30:46.139306    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 13 14:30:46 functional-608191 kubelet[5339]: E1013 14:30:46.139418    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-bpcvp_default(7939308f-4ee2-4691-9165-79aacfa8e749): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:30:46 functional-608191 kubelet[5339]: E1013 14:30:46.139454    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:30:49 functional-608191 kubelet[5339]: E1013 14:30:49.105352    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:30:49 functional-608191 kubelet[5339]: E1013 14:30:49.105402    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:30:49 functional-608191 kubelet[5339]: E1013 14:30:49.105494    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-6qw7q_default(1804e076-c32c-4353-bff8-6c40d2b36a56): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:30:49 functional-608191 kubelet[5339]: E1013 14:30:49.105527    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:30:52 functional-608191 kubelet[5339]: E1013 14:30:52.104374    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:30:52 functional-608191 kubelet[5339]: E1013 14:30:52.104441    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 13 14:30:52 functional-608191 kubelet[5339]: E1013 14:30:52.104599    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(e9c2282b-16f1-4201-a7d5-96801043f1ec): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:30:52 functional-608191 kubelet[5339]: E1013 14:30:52.104637    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:30:55 functional-608191 kubelet[5339]: E1013 14:30:55.928169    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:30:57 functional-608191 kubelet[5339]: E1013 14:30:57.929692    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:31:01 functional-608191 kubelet[5339]: E1013 14:31:01.928740    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:31:06 functional-608191 kubelet[5339]: E1013 14:31:06.928820    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:31:10 functional-608191 kubelet[5339]: E1013 14:31:10.094810    5339 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:31:10 functional-608191 kubelet[5339]: E1013 14:31:10.094883    5339 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 13 14:31:10 functional-608191 kubelet[5339]: E1013 14:31:10.094971    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-7d8vj_default(57a285cb-fa31-4321-96bf-bbbd20c61bc2): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:31:10 functional-608191 kubelet[5339]: E1013 14:31:10.095005    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:31:10 functional-608191 kubelet[5339]: E1013 14:31:10.930484    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:31:12 functional-608191 kubelet[5339]: E1013 14:31:12.928984    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	
	
	==> storage-provisioner [0bdcff79b6f2eb18fd6df3944342b3f5a2cf125d450367aeaefda23398799bad] <==
	W1013 14:30:50.503995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:52.507709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:52.517220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:54.521987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:54.527870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:56.531227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:56.541131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:58.544996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:30:58.551248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:00.555064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:00.565119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:02.568918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:02.574014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:04.577716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:04.583097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:06.586836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:06.592873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:08.596944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:08.602001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:10.605794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:10.616851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:12.620917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:12.627002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:14.631037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:31:14.643637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19906e68c850cc4d2665f6dca007cff3878b00054b2f9e7752b01a49703c8a5b] <==
	I1013 14:24:35.231238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 14:24:35.233267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
helpers_test.go:269: (dbg) Run:  kubectl --context functional-608191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://6b3815b3d85db29741068c9a9b97514906bd1ef352cdf42ca5d2734f39a724e6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 14:25:11 +0000
	      Finished:     Mon, 13 Oct 2025 14:25:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpkbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wpkbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m7s  default-scheduler  Successfully assigned default/busybox-mount to functional-608191
	  Normal  Pulling    6m6s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m5s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.486s (1.486s including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m5s  kubelet            Created container: mount-munger
	  Normal  Started    6m5s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7d8vj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gctw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6gctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m52s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7d8vj to functional-608191
	  Normal   Pulling    2m55s (x5 over 5m52s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m54s (x5 over 5m52s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m54s (x5 over 5m52s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x21 over 5m51s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     48s (x21 over 5m51s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6qw7q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cgfsd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m8s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
	  Normal   Pulling    3m11s (x5 over 6m7s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m10s (x5 over 6m7s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m10s (x5 over 6m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    54s (x21 over 6m6s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     54s (x21 over 6m6s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-bpcvp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtwds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vtwds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m10s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-bpcvp to functional-608191
	  Normal   Pulling    3m23s (x5 over 6m9s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m22s (x5 over 6m9s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m22s (x5 over 6m9s)  kubelet            Error: ErrImagePull
	  Warning  Failed     67s (x20 over 6m8s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    55s (x21 over 6m8s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqfp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kdqfp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-608191
	  Normal   Pulling    3m9s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m8s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m8s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     53s (x20 over 6m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    40s (x21 over 6m1s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.78s)

TestFunctional/parallel/MySQL (603.24s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-608191 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-bpcvp" [7939308f-4ee2-4691-9165-79aacfa8e749] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-13 14:35:07.188198044 +0000 UTC m=+2398.128756432
functional_test.go:1804: (dbg) Run:  kubectl --context functional-608191 describe po mysql-5bb876957f-bpcvp -n default
functional_test.go:1804: (dbg) kubectl --context functional-608191 describe po mysql-5bb876957f-bpcvp -n default:
Name:             mysql-5bb876957f-bpcvp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-608191/192.168.39.10
Start Time:       Mon, 13 Oct 2025 14:25:06 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtwds (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vtwds:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-bpcvp to functional-608191
  Normal   Pulling    7m14s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     7m13s (x5 over 10m)     kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m13s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-608191 logs mysql-5bb876957f-bpcvp -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-608191 logs mysql-5bb876957f-bpcvp -n default: exit status 1 (75.767018ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-bpcvp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-608191 logs mysql-5bb876957f-bpcvp -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-608191 -n functional-608191
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs -n 25: (1.78288505s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                                ARGS                                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-608191 ssh sudo umount -f /mount-9p                                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount3 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh       │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount1 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ mount     │ -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount2 --alsologtostderr -v=1                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ ssh       │ functional-608191 ssh findmnt -T /mount1                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh       │ functional-608191 ssh findmnt -T /mount2                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ ssh       │ functional-608191 ssh findmnt -T /mount3                                                                                                                           │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ mount     │ -p functional-608191 --kill=true                                                                                                                                   │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │                     │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image save kicbase/echo-server:functional-608191 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image rm kicbase/echo-server:functional-608191 --alsologtostderr                                                                                 │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image ls                                                                                                                                         │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ image     │ functional-608191 image save --daemon kicbase/echo-server:functional-608191 --alsologtostderr                                                                      │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:25 UTC │ 13 Oct 25 14:25 UTC │
	│ start     │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start     │ -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                          │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ start     │ -p functional-608191 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false                                    │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-608191 --alsologtostderr -v=1                                                                                                     │ functional-608191 │ jenkins │ v1.37.0 │ 13 Oct 25 14:31 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 14:31:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 14:31:19.291613 1831942 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:31:19.291999 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292017 1831942 out.go:374] Setting ErrFile to fd 2...
	I1013 14:31:19.292025 1831942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.292396 1831942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:31:19.293045 1831942 out.go:368] Setting JSON to false
	I1013 14:31:19.294312 1831942 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22427,"bootTime":1760343452,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:31:19.294428 1831942 start.go:141] virtualization: kvm guest
	I1013 14:31:19.296444 1831942 out.go:179] * [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:31:19.297978 1831942 notify.go:220] Checking for updates...
	I1013 14:31:19.297983 1831942 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:31:19.299274 1831942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:31:19.300464 1831942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:31:19.301569 1831942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:31:19.302616 1831942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:31:19.303778 1831942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:31:19.305317 1831942 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:31:19.305931 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.305984 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.320114 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I1013 14:31:19.320672 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.321379 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.321408 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.321835 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.322029 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.322314 1831942 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:31:19.322636 1831942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.322674 1831942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.337144 1831942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I1013 14:31:19.337704 1831942 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.338258 1831942 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.338283 1831942 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.338647 1831942 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.338878 1831942 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.371631 1831942 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 14:31:19.373087 1831942 start.go:305] selected driver: kvm2
	I1013 14:31:19.373106 1831942 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.373215 1831942 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:31:19.374294 1831942 cni.go:84] Creating CNI manager for ""
	I1013 14:31:19.374351 1831942 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 14:31:19.374397 1831942 start.go:349] cluster config:
	{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.376483 1831942 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b3815b3d85db       56cc512116c8f       9 minutes ago       Exited              mount-munger              0                   e1b3239f98d8c       busybox-mount
	54e018365168b       c3994bc696102       10 minutes ago      Running             kube-apiserver            1                   ae5cac3c5f135       kube-apiserver-functional-608191
	73c62ac23dcef       52546a367cc9e       10 minutes ago      Running             coredns                   2                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	0bdcff79b6f2e       6e38f40d628db       10 minutes ago      Running             storage-provisioner       3                   31e2b1fefe43d       storage-provisioner
	9923b9c3b6134       c3994bc696102       10 minutes ago      Exited              kube-apiserver            0                   ae5cac3c5f135       kube-apiserver-functional-608191
	e3f11c67de677       c80c8dbafe7dd       10 minutes ago      Running             kube-controller-manager   2                   661659159fd35       kube-controller-manager-functional-608191
	552b6794b2ecf       7dd6aaa1717ab       10 minutes ago      Running             kube-scheduler            2                   d8c82bf329c20       kube-scheduler-functional-608191
	19906e68c850c       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       2                   31e2b1fefe43d       storage-provisioner
	b3d48b09ac4ab       fc25172553d79       10 minutes ago      Running             kube-proxy                2                   cccbb832d47ca       kube-proxy-cd8b5
	c9db6877437dc       5f1f5298c888d       10 minutes ago      Running             etcd                      2                   1136f8cb2bfda       etcd-functional-608191
	ccd1d671f4ad2       c80c8dbafe7dd       11 minutes ago      Exited              kube-controller-manager   1                   661659159fd35       kube-controller-manager-functional-608191
	20139c80c2b89       7dd6aaa1717ab       11 minutes ago      Exited              kube-scheduler            1                   d8c82bf329c20       kube-scheduler-functional-608191
	0ff2c0af6db42       5f1f5298c888d       11 minutes ago      Exited              etcd                      1                   1136f8cb2bfda       etcd-functional-608191
	242b510b56dc9       fc25172553d79       11 minutes ago      Exited              kube-proxy                1                   cccbb832d47ca       kube-proxy-cd8b5
	72508a8901416       52546a367cc9e       11 minutes ago      Exited              coredns                   1                   79d79fc021a2c       coredns-66bc5c9577-b59r9
	
	
	==> containerd <==
	Oct 13 14:32:02 functional-608191 containerd[4454]: time="2025-10-13T14:32:02.931671465Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:32:02 functional-608191 containerd[4454]: time="2025-10-13T14:32:02.935992878Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.018444095Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.119633468Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:03 functional-608191 containerd[4454]: time="2025-10-13T14:32:03.119762092Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 13 14:32:42 functional-608191 containerd[4454]: time="2025-10-13T14:32:42.930835811Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 14:32:42 functional-608191 containerd[4454]: time="2025-10-13T14:32:42.935019289Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.002007856Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.099670214Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:43 functional-608191 containerd[4454]: time="2025-10-13T14:32:43.099737744Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.930848465Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.934068289Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:55 functional-608191 containerd[4454]: time="2025-10-13T14:32:55.997808775Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:32:56 functional-608191 containerd[4454]: time="2025-10-13T14:32:56.108129753Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:32:56 functional-608191 containerd[4454]: time="2025-10-13T14:32:56.108256224Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 13 14:34:11 functional-608191 containerd[4454]: time="2025-10-13T14:34:11.930086814Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 14:34:11 functional-608191 containerd[4454]: time="2025-10-13T14:34:11.933230811Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.008011644Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.107862083Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:34:12 functional-608191 containerd[4454]: time="2025-10-13T14:34:12.107946833Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 13 14:34:21 functional-608191 containerd[4454]: time="2025-10-13T14:34:21.929779525Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 13 14:34:21 functional-608191 containerd[4454]: time="2025-10-13T14:34:21.933836309Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.021537437Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.117503535Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 14:34:22 functional-608191 containerd[4454]: time="2025-10-13T14:34:22.117693342Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	
	
	==> coredns [72508a89014167f9db6746deacadcc39d3ca4514e93ad689f070711e8fae5dde] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36858 - 65360 "HINFO IN 3005092589584362483.1560966083017627098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026785639s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [73c62ac23dcef061db1a2cf49c532093463ee196addc24e97307ab20dcf5aeec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35999 - 64742 "HINFO IN 8601583101275943645.7322847173454900088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031744201s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	
	
	==> describe nodes <==
	Name:               functional-608191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-608191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=functional-608191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T14_22_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 14:22:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-608191
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 14:35:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 14:30:47 +0000   Mon, 13 Oct 2025 14:22:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    functional-608191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3422538a8174bd0af79b99fa0817bbd
	  System UUID:                f3422538-a817-4bd0-af79-b99fa0817bbd
	  Boot ID:                    fe252248-25b4-47d2-aaf1-51a9660115e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7d8vj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  default                     hello-node-connect-7d85dfc575-6qw7q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bpcvp                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-b59r9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-608191                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-608191              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-608191     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cd8b5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-608191              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wfr2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-52xnc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-608191 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-608191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-608191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-608191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-608191 event: Registered Node functional-608191 in Controller
	
	
	==> dmesg <==
	[  +1.179092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.109826] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.093375] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.131370] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.239173] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.906606] kauditd_printk_skb: 283 callbacks suppressed
	[Oct13 14:23] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.987192] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.060246] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.772942] kauditd_printk_skb: 28 callbacks suppressed
	[  +9.876433] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.906041] kauditd_printk_skb: 66 callbacks suppressed
	[Oct13 14:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.122131] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.245935] kauditd_printk_skb: 108 callbacks suppressed
	[  +4.172113] kauditd_printk_skb: 67 callbacks suppressed
	[  +8.058295] kauditd_printk_skb: 143 callbacks suppressed
	[Oct13 14:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.013503] kauditd_printk_skb: 72 callbacks suppressed
	[  +3.195165] kauditd_printk_skb: 129 callbacks suppressed
	[  +9.784794] kauditd_printk_skb: 45 callbacks suppressed
	[Oct13 14:31] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [0ff2c0af6db4287d8fb0f21ac68b4d418f30aca39c92b0ab7894714df34c9116] <==
	{"level":"warn","ts":"2025-10-13T14:23:50.478642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.507645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.509654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.535663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.545046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.565385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:23:50.653235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:24:33.216994Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T14:24:33.217137Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"error","ts":"2025-10-13T14:24:33.217254Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T14:24:33.219298Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.219358Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2025-10-13T14:24:33.219480Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T14:24:33.219512Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T14:24:33.219213Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220399Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220436Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220454Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220027Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T14:24:33.220466Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T14:24:33.220473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224162Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"error","ts":"2025-10-13T14:24:33.224284Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.10:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T14:24:33.224309Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2025-10-13T14:24:33.224316Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-608191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> etcd [c9db6877437dc31eee9418cd82cb8418bccd7b125cd05fa5d3cb86774972e283] <==
	{"level":"warn","ts":"2025-10-13T14:24:43.755356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.766429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.779720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.797671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.808981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.823706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.834745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.849532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.864251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.890234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.903686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.914674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.934259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.947959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.965331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:43.980932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.008421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.020181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.034953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.045765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.058431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T14:24:44.158722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T14:34:43.214444Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1271}
	{"level":"info","ts":"2025-10-13T14:34:43.250683Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1271,"took":"35.01677ms","hash":2707211050,"current-db-size-bytes":4263936,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-13T14:34:43.250830Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2707211050,"revision":1271,"compact-revision":-1}
	
	
	==> kernel <==
	 14:35:08 up 13 min,  0 users,  load average: 0.19, 0.25, 0.22
	Linux functional-608191 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [54e018365168b8ec6573769c8afa96e9b89eb529f2d32db595e00c0895ec563b] <==
	I1013 14:24:44.955259       1 aggregator.go:171] initial CRD sync complete...
	I1013 14:24:44.955267       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 14:24:44.955275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 14:24:44.955279       1 cache.go:39] Caches are synced for autoregister controller
	I1013 14:24:44.962085       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 14:24:44.973348       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 14:24:44.983002       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 14:24:44.983039       1 policy_source.go:240] refreshing policies
	I1013 14:24:45.050478       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 14:24:45.734011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 14:24:46.854394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 14:24:48.297742       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 14:24:48.389233       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 14:24:48.547871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 14:24:48.606961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 14:25:02.306998       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.126.250"}
	I1013 14:25:06.844289       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.60.71"}
	I1013 14:25:08.658755       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.255.215"}
	I1013 14:25:24.277694       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.51.235"}
	I1013 14:31:20.356432       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 14:31:20.455832       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 14:31:20.493227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 14:31:20.671491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.141.103"}
	I1013 14:31:20.698510       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.255.140"}
	I1013 14:34:44.903492       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [9923b9c3b6134565e2005a755337ee1e6d742736c6e3c9f98efee81bd4d5802c] <==
	I1013 14:24:41.642829       1 options.go:263] external host was not specified, using 192.168.39.10
	I1013 14:24:41.668518       1 server.go:150] Version: v1.34.1
	I1013 14:24:41.668782       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1013 14:24:41.675050       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [ccd1d671f4ad2cf4085af2d43460e85c051c611308642824b3391ab0bad4f158] <==
	I1013 14:23:55.330332       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 14:23:55.332979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:23:55.334816       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 14:23:55.338137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 14:23:55.340515       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 14:23:55.341252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 14:23:55.342131       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:23:55.342984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 14:23:55.343841       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:23:55.345916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 14:23:55.345995       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 14:23:55.347514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:23:55.351800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 14:23:55.354174       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 14:23:55.354237       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 14:23:55.365427       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 14:23:55.365646       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 14:23:55.365690       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:23:55.366053       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 14:23:55.367408       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:23:55.368689       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:23:55.368714       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 14:23:55.369361       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 14:23:55.369864       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:23:55.370627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [e3f11c67de677fc441824afcbe3a763614b71997830a304ba906478e55265073] <==
	I1013 14:24:48.251970       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 14:24:48.257247       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 14:24:48.264697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 14:24:48.268125       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 14:24:48.270387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 14:24:48.277540       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 14:24:48.281492       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 14:24:48.281911       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 14:24:48.282455       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-608191"
	I1013 14:24:48.283532       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 14:24:48.285291       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 14:24:48.285421       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 14:24:48.286191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 14:24:48.286359       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 14:24:48.286219       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 14:24:48.287912       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 14:24:48.298773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E1013 14:31:20.470285       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.482663       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.490869       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498479       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.498866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511610       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.511730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1013 14:31:20.518744       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [242b510b56dc91101fd76daac2a0f8bb3ace19d938ba94c7d0be4582f8793e46] <==
	I1013 14:23:31.892503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:23:31.993145       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:23:31.993192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:23:31.993261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:23:32.032777       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:23:32.032888       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:23:32.032925       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:23:32.044710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:23:32.045212       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:23:32.045242       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:32.050189       1 config.go:200] "Starting service config controller"
	I1013 14:23:32.050219       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:23:32.050262       1 config.go:309] "Starting node config controller"
	I1013 14:23:32.050283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:23:32.050289       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:23:32.050702       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:23:32.050711       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:23:32.050725       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:23:32.050728       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:23:32.151068       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 14:23:32.151213       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:23:32.152939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b3d48b09ac4ab7f97ae8dd7256135561a415508f359989ac4035b756c0b49b56] <==
	I1013 14:24:34.497361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 14:24:36.901830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 14:24:36.901992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.10"]
	E1013 14:24:36.902089       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 14:24:36.962936       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 14:24:36.963219       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 14:24:36.963260       1 server_linux.go:132] "Using iptables Proxier"
	I1013 14:24:36.979965       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 14:24:36.982117       1 server.go:527] "Version info" version="v1.34.1"
	I1013 14:24:36.982140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:36.987101       1 config.go:200] "Starting service config controller"
	I1013 14:24:36.987189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 14:24:36.987210       1 config.go:106] "Starting endpoint slice config controller"
	I1013 14:24:36.987213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 14:24:36.987227       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 14:24:36.987230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 14:24:36.989952       1 config.go:309] "Starting node config controller"
	I1013 14:24:36.989984       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 14:24:36.989991       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 14:24:37.087813       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 14:24:37.087864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 14:24:37.087892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [20139c80c2b895697ea34ac073bbea54df573b9ea3f8dffa245163ab00715e18] <==
	I1013 14:23:52.516984       1 serving.go:386] Generated self-signed cert in-memory
	I1013 14:23:53.392891       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:23:53.393645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:23:53.416434       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 14:23:53.416479       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.416526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416539       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:23:53.416616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.416626       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.426367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:23:53.427869       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:23:53.517412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:23:53.517510       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 14:23:53.522735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.014501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 14:24:23.014800       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 14:24:23.014930       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 14:24:23.015015       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:23.015038       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1013 14:24:23.015060       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 14:24:23.016307       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 14:24:23.016453       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [552b6794b2ecff0f2c2558459d0aa52965219db398dc9269aade313c2bb7c25e] <==
	I1013 14:24:42.686856       1 serving.go:386] Generated self-signed cert in-memory
	W1013 14:24:44.871016       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 14:24:44.871060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 14:24:44.871069       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 14:24:44.871075       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 14:24:44.971082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 14:24:44.973132       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 14:24:44.980825       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.980854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 14:24:44.981656       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 14:24:44.981718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 14:24:45.083704       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 14:34:22 functional-608191 kubelet[5339]: E1013 14:34:22.118205    5339 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r_kubernetes-dashboard(286cf1cf-2749-44d9-8cf0-71ab18f552e0): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 13 14:34:22 functional-608191 kubelet[5339]: E1013 14:34:22.118237    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:22 functional-608191 kubelet[5339]: E1013 14:34:22.928621    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:23 functional-608191 kubelet[5339]: E1013 14:34:23.928778    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:24 functional-608191 kubelet[5339]: E1013 14:34:24.929972    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:25 functional-608191 kubelet[5339]: E1013 14:34:25.931084    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:26 functional-608191 kubelet[5339]: E1013 14:34:26.928699    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:35 functional-608191 kubelet[5339]: E1013 14:34:35.932119    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:35 functional-608191 kubelet[5339]: E1013 14:34:35.933326    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:36 functional-608191 kubelet[5339]: E1013 14:34:36.928982    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:36 functional-608191 kubelet[5339]: E1013 14:34:36.931329    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:38 functional-608191 kubelet[5339]: E1013 14:34:38.930915    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:39 functional-608191 kubelet[5339]: E1013 14:34:39.929086    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:46 functional-608191 kubelet[5339]: E1013 14:34:46.929195    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:34:47 functional-608191 kubelet[5339]: E1013 14:34:47.928763    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:48 functional-608191 kubelet[5339]: E1013 14:34:48.929304    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:34:50 functional-608191 kubelet[5339]: E1013 14:34:50.929375    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:34:52 functional-608191 kubelet[5339]: E1013 14:34:52.928479    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:34:53 functional-608191 kubelet[5339]: E1013 14:34:53.931976    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	Oct 13 14:34:58 functional-608191 kubelet[5339]: E1013 14:34:58.928672    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-7d8vj" podUID="57a285cb-fa31-4321-96bf-bbbd20c61bc2"
	Oct 13 14:34:58 functional-608191 kubelet[5339]: E1013 14:34:58.930349    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bpcvp" podUID="7939308f-4ee2-4691-9165-79aacfa8e749"
	Oct 13 14:35:02 functional-608191 kubelet[5339]: E1013 14:35:02.930289    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wfr2r" podUID="286cf1cf-2749-44d9-8cf0-71ab18f552e0"
	Oct 13 14:35:04 functional-608191 kubelet[5339]: E1013 14:35:04.928352    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6qw7q" podUID="1804e076-c32c-4353-bff8-6c40d2b36a56"
	Oct 13 14:35:05 functional-608191 kubelet[5339]: E1013 14:35:05.928817    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e9c2282b-16f1-4201-a7d5-96801043f1ec"
	Oct 13 14:35:06 functional-608191 kubelet[5339]: E1013 14:35:06.930180    5339 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-52xnc" podUID="7cb8a23d-7dba-44b2-b365-47a135ee0605"
	
	
	==> storage-provisioner [0bdcff79b6f2eb18fd6df3944342b3f5a2cf125d450367aeaefda23398799bad] <==
	W1013 14:34:43.885527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:45.890225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:45.900443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:47.905816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:47.911914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:49.915993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:49.925079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:51.930662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:51.937634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:53.943510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:53.951341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:55.955382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:55.964879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:57.968455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:57.974874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:59.979211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:34:59.989103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:01.992631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:01.998388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:04.003511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:04.009683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:06.013113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:06.018885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:08.023100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 14:35:08.030794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19906e68c850cc4d2665f6dca007cff3878b00054b2f9e7752b01a49703c8a5b] <==
	I1013 14:24:35.231238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 14:24:35.233267       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
helpers_test.go:269: (dbg) Run:  kubectl --context functional-608191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1 (127.869895ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://6b3815b3d85db29741068c9a9b97514906bd1ef352cdf42ca5d2734f39a724e6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 14:25:11 +0000
	      Finished:     Mon, 13 Oct 2025 14:25:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpkbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wpkbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-608191
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m58s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.486s (1.486s including waiting). Image size: 2395207 bytes.
	  Normal  Created    9m58s  kubelet            Created container: mount-munger
	  Normal  Started    9m58s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7d8vj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gctw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6gctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7d8vj to functional-608191
	  Normal   Pulling    6m48s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m47s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m47s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m41s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m41s (x21 over 9m44s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6qw7q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cgfsd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cgfsd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6qw7q to functional-608191
	  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
	  Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-bpcvp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:06 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtwds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vtwds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-bpcvp to functional-608191
	  Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-608191/192.168.39.10
	Start Time:       Mon, 13 Oct 2025 14:25:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqfp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kdqfp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m55s                   default-scheduler  Successfully assigned default/sp-pod to functional-608191
	  Normal   Pulling    7m2s (x5 over 9m55s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m1s (x5 over 9m55s)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m1s (x5 over 9m55s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m33s (x21 over 9m54s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wfr2r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-52xnc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-608191 describe pod busybox-mount hello-node-75c85bcc94-7d8vj hello-node-connect-7d85dfc575-6qw7q mysql-5bb876957f-bpcvp sp-pod dashboard-metrics-scraper-77bf4d6c4c-wfr2r kubernetes-dashboard-855c9754f9-52xnc: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-608191 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-608191 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7d8vj" [57a285cb-fa31-4321-96bf-bbbd20c61bc2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1013 14:27:20.513768 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:27:48.227565 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-608191 -n functional-608191
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-13 14:35:24.554052245 +0000 UTC m=+2415.494610630
functional_test.go:1460: (dbg) Run:  kubectl --context functional-608191 describe po hello-node-75c85bcc94-7d8vj -n default
functional_test.go:1460: (dbg) kubectl --context functional-608191 describe po hello-node-75c85bcc94-7d8vj -n default:
Name:             hello-node-75c85bcc94-7d8vj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-608191/192.168.39.10
Start Time:       Mon, 13 Oct 2025 14:25:24 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gctw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6gctw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7d8vj to functional-608191
  Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-608191 logs hello-node-75c85bcc94-7d8vj -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-608191 logs hello-node-75c85bcc94-7d8vj -n default: exit status 1 (72.039975ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7d8vj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-608191 logs hello-node-75c85bcc94-7d8vj -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 service --namespace=default --https --url hello-node: exit status 115 (314.811627ms)

                                                
                                                
-- stdout --
	https://192.168.39.10:32665
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-608191 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 service hello-node --url --format={{.IP}}: exit status 115 (325.580917ms)

                                                
                                                
-- stdout --
	192.168.39.10
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-608191 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 service hello-node --url: exit status 115 (326.86821ms)

                                                
                                                
-- stdout --
	http://192.168.39.10:32665
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-608191 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.10:32665
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (946.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 80 (15m46.837062208s)

                                                
                                                
-- stdout --
	* [calico-045564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "calico-045564" primary control-plane node in "calico-045564" cluster
	* Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 15:24:29.075491 1863457 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:24:29.075798 1863457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:24:29.075810 1863457 out.go:374] Setting ErrFile to fd 2...
	I1013 15:24:29.075817 1863457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:24:29.076032 1863457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:24:29.076577 1863457 out.go:368] Setting JSON to false
	I1013 15:24:29.077765 1863457 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":25617,"bootTime":1760343452,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:24:29.077884 1863457 start.go:141] virtualization: kvm guest
	I1013 15:24:29.080135 1863457 out.go:179] * [calico-045564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:24:29.081921 1863457 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:24:29.081949 1863457 notify.go:220] Checking for updates...
	I1013 15:24:29.084766 1863457 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:24:29.086250 1863457 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:24:29.087738 1863457 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:24:29.089306 1863457 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:24:29.090802 1863457 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:24:29.093475 1863457 config.go:182] Loaded profile config "auto-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:24:29.093592 1863457 config.go:182] Loaded profile config "kindnet-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:24:29.093703 1863457 config.go:182] Loaded profile config "pause-383347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:24:29.093847 1863457 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:24:29.139817 1863457 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 15:24:29.141767 1863457 start.go:305] selected driver: kvm2
	I1013 15:24:29.141798 1863457 start.go:925] validating driver "kvm2" against <nil>
	I1013 15:24:29.141820 1863457 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:24:29.142677 1863457 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:24:29.142809 1863457 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:24:29.164135 1863457 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:24:29.164175 1863457 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:24:29.179886 1863457 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:24:29.179942 1863457 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 15:24:29.180234 1863457 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:24:29.180270 1863457 cni.go:84] Creating CNI manager for "calico"
	I1013 15:24:29.180278 1863457 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1013 15:24:29.180339 1863457 start.go:349] cluster config:
	{Name:calico-045564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-045564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:24:29.180460 1863457 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:24:29.182503 1863457 out.go:179] * Starting "calico-045564" primary control-plane node in "calico-045564" cluster
	I1013 15:24:29.183837 1863457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:24:29.183892 1863457 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:24:29.183908 1863457 cache.go:58] Caching tarball of preloaded images
	I1013 15:24:29.184050 1863457 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:24:29.184066 1863457 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:24:29.184200 1863457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/config.json ...
	I1013 15:24:29.184229 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/config.json: {Name:mk19ce119120a9d5c706f9ba9a4e3540288848fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:29.184425 1863457 start.go:360] acquireMachinesLock for calico-045564: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:24:30.154548 1863457 start.go:364] duration metric: took 970.078042ms to acquireMachinesLock for "calico-045564"
	I1013 15:24:30.154627 1863457 start.go:93] Provisioning new machine with config: &{Name:calico-045564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:calico-045564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:24:30.154812 1863457 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 15:24:30.156947 1863457 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1013 15:24:30.157168 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:24:30.157230 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:24:30.177443 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I1013 15:24:30.178011 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:24:30.178614 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:24:30.178640 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:24:30.179033 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:24:30.179266 1863457 main.go:141] libmachine: (calico-045564) Calling .GetMachineName
	I1013 15:24:30.179482 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:30.179669 1863457 start.go:159] libmachine.API.Create for "calico-045564" (driver="kvm2")
	I1013 15:24:30.179706 1863457 client.go:168] LocalClient.Create starting
	I1013 15:24:30.179763 1863457 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 15:24:30.179804 1863457 main.go:141] libmachine: Decoding PEM data...
	I1013 15:24:30.179820 1863457 main.go:141] libmachine: Parsing certificate...
	I1013 15:24:30.179905 1863457 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 15:24:30.179933 1863457 main.go:141] libmachine: Decoding PEM data...
	I1013 15:24:30.179950 1863457 main.go:141] libmachine: Parsing certificate...
	I1013 15:24:30.179972 1863457 main.go:141] libmachine: Running pre-create checks...
	I1013 15:24:30.179983 1863457 main.go:141] libmachine: (calico-045564) Calling .PreCreateCheck
	I1013 15:24:30.180432 1863457 main.go:141] libmachine: (calico-045564) Calling .GetConfigRaw
	I1013 15:24:30.181083 1863457 main.go:141] libmachine: Creating machine...
	I1013 15:24:30.181169 1863457 main.go:141] libmachine: (calico-045564) Calling .Create
	I1013 15:24:30.181534 1863457 main.go:141] libmachine: (calico-045564) creating domain...
	I1013 15:24:30.181557 1863457 main.go:141] libmachine: (calico-045564) creating network...
	I1013 15:24:30.183213 1863457 main.go:141] libmachine: (calico-045564) DBG | found existing default network
	I1013 15:24:30.183405 1863457 main.go:141] libmachine: (calico-045564) DBG | <network connections='3'>
	I1013 15:24:30.183511 1863457 main.go:141] libmachine: (calico-045564) DBG |   <name>default</name>
	I1013 15:24:30.183531 1863457 main.go:141] libmachine: (calico-045564) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 15:24:30.183540 1863457 main.go:141] libmachine: (calico-045564) DBG |   <forward mode='nat'>
	I1013 15:24:30.183548 1863457 main.go:141] libmachine: (calico-045564) DBG |     <nat>
	I1013 15:24:30.183557 1863457 main.go:141] libmachine: (calico-045564) DBG |       <port start='1024' end='65535'/>
	I1013 15:24:30.183565 1863457 main.go:141] libmachine: (calico-045564) DBG |     </nat>
	I1013 15:24:30.183575 1863457 main.go:141] libmachine: (calico-045564) DBG |   </forward>
	I1013 15:24:30.183584 1863457 main.go:141] libmachine: (calico-045564) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 15:24:30.183593 1863457 main.go:141] libmachine: (calico-045564) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 15:24:30.183620 1863457 main.go:141] libmachine: (calico-045564) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 15:24:30.183650 1863457 main.go:141] libmachine: (calico-045564) DBG |     <dhcp>
	I1013 15:24:30.183663 1863457 main.go:141] libmachine: (calico-045564) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 15:24:30.183670 1863457 main.go:141] libmachine: (calico-045564) DBG |     </dhcp>
	I1013 15:24:30.183698 1863457 main.go:141] libmachine: (calico-045564) DBG |   </ip>
	I1013 15:24:30.183742 1863457 main.go:141] libmachine: (calico-045564) DBG | </network>
	I1013 15:24:30.183757 1863457 main.go:141] libmachine: (calico-045564) DBG | 
	I1013 15:24:30.184474 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.184290 1863486 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a6:55:62} reservation:<nil>}
	I1013 15:24:30.185468 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.185370 1863486 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002761a0}
	I1013 15:24:30.185577 1863457 main.go:141] libmachine: (calico-045564) DBG | defining private network:
	I1013 15:24:30.185706 1863457 main.go:141] libmachine: (calico-045564) DBG | 
	I1013 15:24:30.185744 1863457 main.go:141] libmachine: (calico-045564) DBG | <network>
	I1013 15:24:30.185760 1863457 main.go:141] libmachine: (calico-045564) DBG |   <name>mk-calico-045564</name>
	I1013 15:24:30.185768 1863457 main.go:141] libmachine: (calico-045564) DBG |   <dns enable='no'/>
	I1013 15:24:30.185776 1863457 main.go:141] libmachine: (calico-045564) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:24:30.185785 1863457 main.go:141] libmachine: (calico-045564) DBG |     <dhcp>
	I1013 15:24:30.185793 1863457 main.go:141] libmachine: (calico-045564) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:24:30.185802 1863457 main.go:141] libmachine: (calico-045564) DBG |     </dhcp>
	I1013 15:24:30.185807 1863457 main.go:141] libmachine: (calico-045564) DBG |   </ip>
	I1013 15:24:30.185815 1863457 main.go:141] libmachine: (calico-045564) DBG | </network>
	I1013 15:24:30.185820 1863457 main.go:141] libmachine: (calico-045564) DBG | 
	I1013 15:24:30.192048 1863457 main.go:141] libmachine: (calico-045564) DBG | creating private network mk-calico-045564 192.168.50.0/24...
	I1013 15:24:30.279749 1863457 main.go:141] libmachine: (calico-045564) DBG | private network mk-calico-045564 192.168.50.0/24 created
	I1013 15:24:30.280089 1863457 main.go:141] libmachine: (calico-045564) DBG | <network>
	I1013 15:24:30.280107 1863457 main.go:141] libmachine: (calico-045564) DBG |   <name>mk-calico-045564</name>
	I1013 15:24:30.280119 1863457 main.go:141] libmachine: (calico-045564) DBG |   <uuid>d28ae583-dc42-4707-9766-f33fe4e80a38</uuid>
	I1013 15:24:30.280131 1863457 main.go:141] libmachine: (calico-045564) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564 ...
	I1013 15:24:30.280139 1863457 main.go:141] libmachine: (calico-045564) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1013 15:24:30.280151 1863457 main.go:141] libmachine: (calico-045564) DBG |   <mac address='52:54:00:17:72:85'/>
	I1013 15:24:30.280161 1863457 main.go:141] libmachine: (calico-045564) DBG |   <dns enable='no'/>
	I1013 15:24:30.280174 1863457 main.go:141] libmachine: (calico-045564) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 15:24:30.280190 1863457 main.go:141] libmachine: (calico-045564) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:24:30.280196 1863457 main.go:141] libmachine: (calico-045564) DBG |     <dhcp>
	I1013 15:24:30.280208 1863457 main.go:141] libmachine: (calico-045564) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:24:30.280222 1863457 main.go:141] libmachine: (calico-045564) DBG |     </dhcp>
	I1013 15:24:30.280272 1863457 main.go:141] libmachine: (calico-045564) DBG |   </ip>
	I1013 15:24:30.280294 1863457 main.go:141] libmachine: (calico-045564) DBG | </network>
	I1013 15:24:30.280313 1863457 main.go:141] libmachine: (calico-045564) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 15:24:30.280370 1863457 main.go:141] libmachine: (calico-045564) DBG | 
	I1013 15:24:30.280408 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.280088 1863486 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:24:30.599613 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.599459 1863486 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa...
	I1013 15:24:30.727652 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.727523 1863486 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/calico-045564.rawdisk...
	I1013 15:24:30.727684 1863457 main.go:141] libmachine: (calico-045564) DBG | Writing magic tar header
	I1013 15:24:30.727699 1863457 main.go:141] libmachine: (calico-045564) DBG | Writing SSH key tar header
	I1013 15:24:30.727726 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:30.727644 1863486 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564 ...
	I1013 15:24:30.727841 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564
	I1013 15:24:30.727868 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564 (perms=drwx------)
	I1013 15:24:30.727887 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 15:24:30.727910 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:24:30.727922 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 15:24:30.727932 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 15:24:30.727939 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home/jenkins
	I1013 15:24:30.727949 1863457 main.go:141] libmachine: (calico-045564) DBG | checking permissions on dir: /home
	I1013 15:24:30.727974 1863457 main.go:141] libmachine: (calico-045564) DBG | skipping /home - not owner
	I1013 15:24:30.727990 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 15:24:30.728012 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 15:24:30.728025 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 15:24:30.728033 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 15:24:30.728041 1863457 main.go:141] libmachine: (calico-045564) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 15:24:30.728051 1863457 main.go:141] libmachine: (calico-045564) defining domain...
	I1013 15:24:30.729226 1863457 main.go:141] libmachine: (calico-045564) defining domain using XML: 
	I1013 15:24:30.729254 1863457 main.go:141] libmachine: (calico-045564) <domain type='kvm'>
	I1013 15:24:30.729265 1863457 main.go:141] libmachine: (calico-045564)   <name>calico-045564</name>
	I1013 15:24:30.729278 1863457 main.go:141] libmachine: (calico-045564)   <memory unit='MiB'>3072</memory>
	I1013 15:24:30.729308 1863457 main.go:141] libmachine: (calico-045564)   <vcpu>2</vcpu>
	I1013 15:24:30.729331 1863457 main.go:141] libmachine: (calico-045564)   <features>
	I1013 15:24:30.729343 1863457 main.go:141] libmachine: (calico-045564)     <acpi/>
	I1013 15:24:30.729352 1863457 main.go:141] libmachine: (calico-045564)     <apic/>
	I1013 15:24:30.729361 1863457 main.go:141] libmachine: (calico-045564)     <pae/>
	I1013 15:24:30.729370 1863457 main.go:141] libmachine: (calico-045564)   </features>
	I1013 15:24:30.729380 1863457 main.go:141] libmachine: (calico-045564)   <cpu mode='host-passthrough'>
	I1013 15:24:30.729395 1863457 main.go:141] libmachine: (calico-045564)   </cpu>
	I1013 15:24:30.729455 1863457 main.go:141] libmachine: (calico-045564)   <os>
	I1013 15:24:30.729481 1863457 main.go:141] libmachine: (calico-045564)     <type>hvm</type>
	I1013 15:24:30.729501 1863457 main.go:141] libmachine: (calico-045564)     <boot dev='cdrom'/>
	I1013 15:24:30.729512 1863457 main.go:141] libmachine: (calico-045564)     <boot dev='hd'/>
	I1013 15:24:30.729524 1863457 main.go:141] libmachine: (calico-045564)     <bootmenu enable='no'/>
	I1013 15:24:30.729531 1863457 main.go:141] libmachine: (calico-045564)   </os>
	I1013 15:24:30.729537 1863457 main.go:141] libmachine: (calico-045564)   <devices>
	I1013 15:24:30.729542 1863457 main.go:141] libmachine: (calico-045564)     <disk type='file' device='cdrom'>
	I1013 15:24:30.729553 1863457 main.go:141] libmachine: (calico-045564)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/boot2docker.iso'/>
	I1013 15:24:30.729562 1863457 main.go:141] libmachine: (calico-045564)       <target dev='hdc' bus='scsi'/>
	I1013 15:24:30.729568 1863457 main.go:141] libmachine: (calico-045564)       <readonly/>
	I1013 15:24:30.729577 1863457 main.go:141] libmachine: (calico-045564)     </disk>
	I1013 15:24:30.729587 1863457 main.go:141] libmachine: (calico-045564)     <disk type='file' device='disk'>
	I1013 15:24:30.729603 1863457 main.go:141] libmachine: (calico-045564)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 15:24:30.729620 1863457 main.go:141] libmachine: (calico-045564)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/calico-045564.rawdisk'/>
	I1013 15:24:30.729659 1863457 main.go:141] libmachine: (calico-045564)       <target dev='hda' bus='virtio'/>
	I1013 15:24:30.729675 1863457 main.go:141] libmachine: (calico-045564)     </disk>
	I1013 15:24:30.729685 1863457 main.go:141] libmachine: (calico-045564)     <interface type='network'>
	I1013 15:24:30.729694 1863457 main.go:141] libmachine: (calico-045564)       <source network='mk-calico-045564'/>
	I1013 15:24:30.729704 1863457 main.go:141] libmachine: (calico-045564)       <model type='virtio'/>
	I1013 15:24:30.729748 1863457 main.go:141] libmachine: (calico-045564)     </interface>
	I1013 15:24:30.729764 1863457 main.go:141] libmachine: (calico-045564)     <interface type='network'>
	I1013 15:24:30.729795 1863457 main.go:141] libmachine: (calico-045564)       <source network='default'/>
	I1013 15:24:30.729826 1863457 main.go:141] libmachine: (calico-045564)       <model type='virtio'/>
	I1013 15:24:30.729839 1863457 main.go:141] libmachine: (calico-045564)     </interface>
	I1013 15:24:30.729850 1863457 main.go:141] libmachine: (calico-045564)     <serial type='pty'>
	I1013 15:24:30.729873 1863457 main.go:141] libmachine: (calico-045564)       <target port='0'/>
	I1013 15:24:30.729882 1863457 main.go:141] libmachine: (calico-045564)     </serial>
	I1013 15:24:30.729895 1863457 main.go:141] libmachine: (calico-045564)     <console type='pty'>
	I1013 15:24:30.729905 1863457 main.go:141] libmachine: (calico-045564)       <target type='serial' port='0'/>
	I1013 15:24:30.729918 1863457 main.go:141] libmachine: (calico-045564)     </console>
	I1013 15:24:30.729928 1863457 main.go:141] libmachine: (calico-045564)     <rng model='virtio'>
	I1013 15:24:30.729937 1863457 main.go:141] libmachine: (calico-045564)       <backend model='random'>/dev/random</backend>
	I1013 15:24:30.729946 1863457 main.go:141] libmachine: (calico-045564)     </rng>
	I1013 15:24:30.729963 1863457 main.go:141] libmachine: (calico-045564)   </devices>
	I1013 15:24:30.729973 1863457 main.go:141] libmachine: (calico-045564) </domain>
	I1013 15:24:30.729987 1863457 main.go:141] libmachine: (calico-045564) 
	I1013 15:24:30.734296 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:07:45:76 in network default
	I1013 15:24:30.734979 1863457 main.go:141] libmachine: (calico-045564) starting domain...
	I1013 15:24:30.735062 1863457 main.go:141] libmachine: (calico-045564) ensuring networks are active...
	I1013 15:24:30.735079 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:30.735899 1863457 main.go:141] libmachine: (calico-045564) Ensuring network default is active
	I1013 15:24:30.736436 1863457 main.go:141] libmachine: (calico-045564) Ensuring network mk-calico-045564 is active
	I1013 15:24:30.737309 1863457 main.go:141] libmachine: (calico-045564) getting domain XML...
	I1013 15:24:30.738560 1863457 main.go:141] libmachine: (calico-045564) DBG | starting domain XML:
	I1013 15:24:30.738580 1863457 main.go:141] libmachine: (calico-045564) DBG | <domain type='kvm'>
	I1013 15:24:30.738590 1863457 main.go:141] libmachine: (calico-045564) DBG |   <name>calico-045564</name>
	I1013 15:24:30.738597 1863457 main.go:141] libmachine: (calico-045564) DBG |   <uuid>8eda102a-436e-46bd-a7b6-6e70ac4b18ad</uuid>
	I1013 15:24:30.738606 1863457 main.go:141] libmachine: (calico-045564) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:24:30.738612 1863457 main.go:141] libmachine: (calico-045564) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:24:30.738624 1863457 main.go:141] libmachine: (calico-045564) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:24:30.738629 1863457 main.go:141] libmachine: (calico-045564) DBG |   <os>
	I1013 15:24:30.738638 1863457 main.go:141] libmachine: (calico-045564) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:24:30.738644 1863457 main.go:141] libmachine: (calico-045564) DBG |     <boot dev='cdrom'/>
	I1013 15:24:30.738651 1863457 main.go:141] libmachine: (calico-045564) DBG |     <boot dev='hd'/>
	I1013 15:24:30.738658 1863457 main.go:141] libmachine: (calico-045564) DBG |     <bootmenu enable='no'/>
	I1013 15:24:30.738685 1863457 main.go:141] libmachine: (calico-045564) DBG |   </os>
	I1013 15:24:30.738702 1863457 main.go:141] libmachine: (calico-045564) DBG |   <features>
	I1013 15:24:30.738736 1863457 main.go:141] libmachine: (calico-045564) DBG |     <acpi/>
	I1013 15:24:30.738748 1863457 main.go:141] libmachine: (calico-045564) DBG |     <apic/>
	I1013 15:24:30.738756 1863457 main.go:141] libmachine: (calico-045564) DBG |     <pae/>
	I1013 15:24:30.738765 1863457 main.go:141] libmachine: (calico-045564) DBG |   </features>
	I1013 15:24:30.738776 1863457 main.go:141] libmachine: (calico-045564) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:24:30.738784 1863457 main.go:141] libmachine: (calico-045564) DBG |   <clock offset='utc'/>
	I1013 15:24:30.738793 1863457 main.go:141] libmachine: (calico-045564) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:24:30.738801 1863457 main.go:141] libmachine: (calico-045564) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:24:30.738810 1863457 main.go:141] libmachine: (calico-045564) DBG |   <on_crash>destroy</on_crash>
	I1013 15:24:30.738835 1863457 main.go:141] libmachine: (calico-045564) DBG |   <devices>
	I1013 15:24:30.738854 1863457 main.go:141] libmachine: (calico-045564) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:24:30.738873 1863457 main.go:141] libmachine: (calico-045564) DBG |     <disk type='file' device='cdrom'>
	I1013 15:24:30.738890 1863457 main.go:141] libmachine: (calico-045564) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:24:30.738905 1863457 main.go:141] libmachine: (calico-045564) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/boot2docker.iso'/>
	I1013 15:24:30.738921 1863457 main.go:141] libmachine: (calico-045564) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:24:30.738931 1863457 main.go:141] libmachine: (calico-045564) DBG |       <readonly/>
	I1013 15:24:30.738941 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:24:30.738953 1863457 main.go:141] libmachine: (calico-045564) DBG |     </disk>
	I1013 15:24:30.738962 1863457 main.go:141] libmachine: (calico-045564) DBG |     <disk type='file' device='disk'>
	I1013 15:24:30.738975 1863457 main.go:141] libmachine: (calico-045564) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:24:30.738987 1863457 main.go:141] libmachine: (calico-045564) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/calico-045564.rawdisk'/>
	I1013 15:24:30.739000 1863457 main.go:141] libmachine: (calico-045564) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:24:30.739008 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:24:30.739015 1863457 main.go:141] libmachine: (calico-045564) DBG |     </disk>
	I1013 15:24:30.739029 1863457 main.go:141] libmachine: (calico-045564) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:24:30.739042 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:24:30.739063 1863457 main.go:141] libmachine: (calico-045564) DBG |     </controller>
	I1013 15:24:30.739077 1863457 main.go:141] libmachine: (calico-045564) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:24:30.739089 1863457 main.go:141] libmachine: (calico-045564) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:24:30.739104 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:24:30.739119 1863457 main.go:141] libmachine: (calico-045564) DBG |     </controller>
	I1013 15:24:30.739129 1863457 main.go:141] libmachine: (calico-045564) DBG |     <interface type='network'>
	I1013 15:24:30.739143 1863457 main.go:141] libmachine: (calico-045564) DBG |       <mac address='52:54:00:55:c8:0a'/>
	I1013 15:24:30.739152 1863457 main.go:141] libmachine: (calico-045564) DBG |       <source network='mk-calico-045564'/>
	I1013 15:24:30.739159 1863457 main.go:141] libmachine: (calico-045564) DBG |       <model type='virtio'/>
	I1013 15:24:30.739173 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:24:30.739180 1863457 main.go:141] libmachine: (calico-045564) DBG |     </interface>
	I1013 15:24:30.739190 1863457 main.go:141] libmachine: (calico-045564) DBG |     <interface type='network'>
	I1013 15:24:30.739198 1863457 main.go:141] libmachine: (calico-045564) DBG |       <mac address='52:54:00:07:45:76'/>
	I1013 15:24:30.739207 1863457 main.go:141] libmachine: (calico-045564) DBG |       <source network='default'/>
	I1013 15:24:30.739216 1863457 main.go:141] libmachine: (calico-045564) DBG |       <model type='virtio'/>
	I1013 15:24:30.739232 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:24:30.739251 1863457 main.go:141] libmachine: (calico-045564) DBG |     </interface>
	I1013 15:24:30.739262 1863457 main.go:141] libmachine: (calico-045564) DBG |     <serial type='pty'>
	I1013 15:24:30.739277 1863457 main.go:141] libmachine: (calico-045564) DBG |       <target type='isa-serial' port='0'>
	I1013 15:24:30.739289 1863457 main.go:141] libmachine: (calico-045564) DBG |         <model name='isa-serial'/>
	I1013 15:24:30.739295 1863457 main.go:141] libmachine: (calico-045564) DBG |       </target>
	I1013 15:24:30.739330 1863457 main.go:141] libmachine: (calico-045564) DBG |     </serial>
	I1013 15:24:30.739348 1863457 main.go:141] libmachine: (calico-045564) DBG |     <console type='pty'>
	I1013 15:24:30.739373 1863457 main.go:141] libmachine: (calico-045564) DBG |       <target type='serial' port='0'/>
	I1013 15:24:30.739386 1863457 main.go:141] libmachine: (calico-045564) DBG |     </console>
	I1013 15:24:30.739395 1863457 main.go:141] libmachine: (calico-045564) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:24:30.739412 1863457 main.go:141] libmachine: (calico-045564) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:24:30.739422 1863457 main.go:141] libmachine: (calico-045564) DBG |     <audio id='1' type='none'/>
	I1013 15:24:30.739432 1863457 main.go:141] libmachine: (calico-045564) DBG |     <memballoon model='virtio'>
	I1013 15:24:30.739443 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:24:30.739458 1863457 main.go:141] libmachine: (calico-045564) DBG |     </memballoon>
	I1013 15:24:30.739488 1863457 main.go:141] libmachine: (calico-045564) DBG |     <rng model='virtio'>
	I1013 15:24:30.739510 1863457 main.go:141] libmachine: (calico-045564) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:24:30.739528 1863457 main.go:141] libmachine: (calico-045564) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:24:30.739539 1863457 main.go:141] libmachine: (calico-045564) DBG |     </rng>
	I1013 15:24:30.739548 1863457 main.go:141] libmachine: (calico-045564) DBG |   </devices>
	I1013 15:24:30.739557 1863457 main.go:141] libmachine: (calico-045564) DBG | </domain>
	I1013 15:24:30.739566 1863457 main.go:141] libmachine: (calico-045564) DBG | 
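The domain XML dumped above declares two `<interface>` devices (one on `mk-calico-045564`, one on `default`), and the driver later matches DHCP leases by the MAC addresses defined there. A minimal, self-contained sketch of pulling those MACs out of such a definition with `encoding/xml` — the struct and function names here are illustrative, not minikube's own types:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Minimal structs for the parts of a libvirt domain definition we care
// about here (hypothetical names, not minikube's internal types).
type domain struct {
	Name       string  `xml:"name"`
	Interfaces []iface `xml:"devices>interface"`
}

type iface struct {
	MAC    macAddr `xml:"mac"`
	Source source  `xml:"source"`
}

type macAddr struct {
	Address string `xml:"address,attr"`
}

type source struct {
	Network string `xml:"network,attr"`
}

// extractMACs returns the MAC address of every <interface> in the XML.
func extractMACs(domXML string) ([]string, error) {
	var d domain
	if err := xml.Unmarshal([]byte(domXML), &d); err != nil {
		return nil, err
	}
	macs := make([]string, 0, len(d.Interfaces))
	for _, in := range d.Interfaces {
		macs = append(macs, in.MAC.Address)
	}
	return macs, nil
}

// sample is a trimmed-down copy of the domain XML logged above.
const sample = `<domain type='kvm'>
  <name>calico-045564</name>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:55:c8:0a'/>
      <source network='mk-calico-045564'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:07:45:76'/>
      <source network='default'/>
    </interface>
  </devices>
</domain>`

func main() {
	macs, err := extractMACs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(macs) // [52:54:00:55:c8:0a 52:54:00:07:45:76]
}
```

The two MACs printed are exactly the ones the "has defined MAC address ... in network ..." DBG lines report for this domain.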
	I1013 15:24:31.250505 1863457 main.go:141] libmachine: (calico-045564) waiting for domain to start...
	I1013 15:24:31.252194 1863457 main.go:141] libmachine: (calico-045564) domain is now running
	I1013 15:24:31.252288 1863457 main.go:141] libmachine: (calico-045564) waiting for IP...
	I1013 15:24:31.253094 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:31.254025 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:31.254069 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:31.254389 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:31.254432 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:31.254388 1863486 retry.go:31] will retry after 271.562608ms: waiting for domain to come up
	I1013 15:24:31.528390 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:31.529147 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:31.529177 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:31.529435 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:31.529463 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:31.529437 1863486 retry.go:31] will retry after 349.736841ms: waiting for domain to come up
	I1013 15:24:31.881569 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:31.882780 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:31.882805 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:31.883858 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:31.883954 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:31.883821 1863486 retry.go:31] will retry after 481.920319ms: waiting for domain to come up
	I1013 15:24:32.369251 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:32.370740 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:32.370775 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:32.371326 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:32.371367 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:32.371128 1863486 retry.go:31] will retry after 385.514803ms: waiting for domain to come up
	I1013 15:24:32.759090 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:32.760783 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:32.760806 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:32.761265 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:32.761323 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:32.761232 1863486 retry.go:31] will retry after 739.013176ms: waiting for domain to come up
	I1013 15:24:33.502630 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:33.503540 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:33.503602 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:33.504011 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:33.504071 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:33.504006 1863486 retry.go:31] will retry after 627.25546ms: waiting for domain to come up
	I1013 15:24:34.133908 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:34.134788 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:34.134967 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:34.135406 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:34.135435 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:34.135387 1863486 retry.go:31] will retry after 987.385805ms: waiting for domain to come up
	I1013 15:24:35.124386 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:35.125129 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:35.125227 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:35.125657 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:35.125688 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:35.125572 1863486 retry.go:31] will retry after 1.434061924s: waiting for domain to come up
	I1013 15:24:36.562274 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:36.563141 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:36.563179 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:36.563664 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:36.563694 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:36.563609 1863486 retry.go:31] will retry after 1.820669524s: waiting for domain to come up
	I1013 15:24:38.386228 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:38.386963 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:38.387011 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:38.387425 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:38.387454 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:38.387401 1863486 retry.go:31] will retry after 1.519593036s: waiting for domain to come up
	I1013 15:24:39.909101 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:39.909690 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:39.909731 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:39.910174 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:39.910198 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:39.910156 1863486 retry.go:31] will retry after 2.053957758s: waiting for domain to come up
	I1013 15:24:41.966540 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:41.968281 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:41.968322 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:41.968755 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:41.968786 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:41.968730 1863486 retry.go:31] will retry after 3.045744735s: waiting for domain to come up
	I1013 15:24:45.018236 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:45.019025 1863457 main.go:141] libmachine: (calico-045564) DBG | no network interface addresses found for domain calico-045564 (source=lease)
	I1013 15:24:45.019054 1863457 main.go:141] libmachine: (calico-045564) DBG | trying to list again with source=arp
	I1013 15:24:45.019461 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find current IP address of domain calico-045564 in network mk-calico-045564 (interfaces detected: [])
	I1013 15:24:45.019482 1863457 main.go:141] libmachine: (calico-045564) DBG | I1013 15:24:45.019429 1863486 retry.go:31] will retry after 3.018308172s: waiting for domain to come up
	I1013 15:24:48.041276 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.042278 1863457 main.go:141] libmachine: (calico-045564) found domain IP: 192.168.50.7
	I1013 15:24:48.042304 1863457 main.go:141] libmachine: (calico-045564) reserving static IP address...
	I1013 15:24:48.042317 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has current primary IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.042803 1863457 main.go:141] libmachine: (calico-045564) DBG | unable to find host DHCP lease matching {name: "calico-045564", mac: "52:54:00:55:c8:0a", ip: "192.168.50.7"} in network mk-calico-045564
	I1013 15:24:48.262550 1863457 main.go:141] libmachine: (calico-045564) DBG | Getting to WaitForSSH function...
	I1013 15:24:48.262583 1863457 main.go:141] libmachine: (calico-045564) reserved static IP address 192.168.50.7 for domain calico-045564
	I1013 15:24:48.262649 1863457 main.go:141] libmachine: (calico-045564) waiting for SSH...
	I1013 15:24:48.266682 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.267308 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.267338 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.267620 1863457 main.go:141] libmachine: (calico-045564) DBG | Using SSH client type: external
	I1013 15:24:48.267654 1863457 main.go:141] libmachine: (calico-045564) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa (-rw-------)
	I1013 15:24:48.267694 1863457 main.go:141] libmachine: (calico-045564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:24:48.267722 1863457 main.go:141] libmachine: (calico-045564) DBG | About to run SSH command:
	I1013 15:24:48.267741 1863457 main.go:141] libmachine: (calico-045564) DBG | exit 0
	I1013 15:24:48.411464 1863457 main.go:141] libmachine: (calico-045564) DBG | SSH cmd err, output: <nil>: 
	I1013 15:24:48.411819 1863457 main.go:141] libmachine: (calico-045564) domain creation complete
	I1013 15:24:48.412348 1863457 main.go:141] libmachine: (calico-045564) Calling .GetConfigRaw
	I1013 15:24:48.413212 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:48.413491 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:48.413722 1863457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 15:24:48.413744 1863457 main.go:141] libmachine: (calico-045564) Calling .GetState
	I1013 15:24:48.415698 1863457 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 15:24:48.415737 1863457 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 15:24:48.415746 1863457 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 15:24:48.415753 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:48.419220 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.419839 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.419865 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.420124 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:48.420366 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.420576 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.420751 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:48.420974 1863457 main.go:141] libmachine: Using SSH client type: native
	I1013 15:24:48.421327 1863457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I1013 15:24:48.421366 1863457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 15:24:48.549499 1863457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:24:48.549538 1863457 main.go:141] libmachine: Detecting the provisioner...
	I1013 15:24:48.549551 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:48.554314 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.554959 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.555000 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.555383 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:48.555683 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.555926 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.556149 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:48.556385 1863457 main.go:141] libmachine: Using SSH client type: native
	I1013 15:24:48.556703 1863457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I1013 15:24:48.556741 1863457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 15:24:48.692726 1863457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 15:24:48.692854 1863457 main.go:141] libmachine: found compatible host: buildroot
	I1013 15:24:48.692873 1863457 main.go:141] libmachine: Provisioning with buildroot...
	I1013 15:24:48.692883 1863457 main.go:141] libmachine: (calico-045564) Calling .GetMachineName
	I1013 15:24:48.693205 1863457 buildroot.go:166] provisioning hostname "calico-045564"
	I1013 15:24:48.693234 1863457 main.go:141] libmachine: (calico-045564) Calling .GetMachineName
	I1013 15:24:48.693448 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:48.697841 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.698379 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.698409 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.698647 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:48.698891 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.699066 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.699251 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:48.699485 1863457 main.go:141] libmachine: Using SSH client type: native
	I1013 15:24:48.699741 1863457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I1013 15:24:48.699757 1863457 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-045564 && echo "calico-045564" | sudo tee /etc/hostname
	I1013 15:24:48.850539 1863457 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-045564
	
	I1013 15:24:48.850595 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:48.854348 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.854784 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.854814 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.855133 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:48.855359 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.855519 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:48.855649 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:48.855954 1863457 main.go:141] libmachine: Using SSH client type: native
	I1013 15:24:48.856223 1863457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I1013 15:24:48.856242 1863457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-045564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-045564/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-045564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:24:48.990293 1863457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:24:48.990350 1863457 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:24:48.990382 1863457 buildroot.go:174] setting up certificates
	I1013 15:24:48.990397 1863457 provision.go:84] configureAuth start
	I1013 15:24:48.990413 1863457 main.go:141] libmachine: (calico-045564) Calling .GetMachineName
	I1013 15:24:48.990787 1863457 main.go:141] libmachine: (calico-045564) Calling .GetIP
	I1013 15:24:48.994814 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.995367 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.995398 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.995658 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:48.998953 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.999466 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:48.999490 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:48.999763 1863457 provision.go:143] copyHostCerts
	I1013 15:24:48.999841 1863457 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:24:48.999889 1863457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:24:49.000009 1863457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:24:49.000186 1863457 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:24:49.000205 1863457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:24:49.000263 1863457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:24:49.000360 1863457 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:24:49.000373 1863457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:24:49.000420 1863457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:24:49.000508 1863457 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.calico-045564 san=[127.0.0.1 192.168.50.7 calico-045564 localhost minikube]
	I1013 15:24:49.033583 1863457 provision.go:177] copyRemoteCerts
	I1013 15:24:49.033666 1863457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:24:49.033730 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:49.037790 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.038308 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.038346 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.038639 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:49.038874 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.039069 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:49.039295 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:24:49.134051 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 15:24:49.175471 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 15:24:49.214166 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:24:49.257652 1863457 provision.go:87] duration metric: took 267.236329ms to configureAuth
	I1013 15:24:49.257691 1863457 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:24:49.257937 1863457 config.go:182] Loaded profile config "calico-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:24:49.257973 1863457 main.go:141] libmachine: Checking connection to Docker...
	I1013 15:24:49.257988 1863457 main.go:141] libmachine: (calico-045564) Calling .GetURL
	I1013 15:24:49.259737 1863457 main.go:141] libmachine: (calico-045564) DBG | using libvirt version 8000000
	I1013 15:24:49.263269 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.263824 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.263861 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.264174 1863457 main.go:141] libmachine: Docker is up and running!
	I1013 15:24:49.264194 1863457 main.go:141] libmachine: Reticulating splines...
	I1013 15:24:49.264204 1863457 client.go:171] duration metric: took 19.084468742s to LocalClient.Create
	I1013 15:24:49.264235 1863457 start.go:167] duration metric: took 19.084569882s to libmachine.API.Create "calico-045564"
	I1013 15:24:49.264249 1863457 start.go:293] postStartSetup for "calico-045564" (driver="kvm2")
	I1013 15:24:49.264263 1863457 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:24:49.264284 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:49.264614 1863457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:24:49.264654 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:49.268987 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.270058 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.270092 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.270370 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:49.270635 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.270869 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:49.271047 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:24:49.374343 1863457 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:24:49.380465 1863457 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:24:49.380500 1863457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:24:49.380569 1863457 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:24:49.380697 1863457 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:24:49.380868 1863457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:24:49.396950 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:24:49.436499 1863457 start.go:296] duration metric: took 172.219841ms for postStartSetup
	I1013 15:24:49.436557 1863457 main.go:141] libmachine: (calico-045564) Calling .GetConfigRaw
	I1013 15:24:49.437537 1863457 main.go:141] libmachine: (calico-045564) Calling .GetIP
	I1013 15:24:49.441081 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.441650 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.441686 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.442014 1863457 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/config.json ...
	I1013 15:24:49.442274 1863457 start.go:128] duration metric: took 19.287442967s to createHost
	I1013 15:24:49.442316 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:49.445435 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.445892 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.445920 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.446111 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:49.446323 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.446512 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.446744 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:49.446942 1863457 main.go:141] libmachine: Using SSH client type: native
	I1013 15:24:49.447195 1863457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I1013 15:24:49.447209 1863457 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:24:49.575908 1863457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760369089.499110595
	
	I1013 15:24:49.575944 1863457 fix.go:216] guest clock: 1760369089.499110595
	I1013 15:24:49.575954 1863457 fix.go:229] Guest: 2025-10-13 15:24:49.499110595 +0000 UTC Remote: 2025-10-13 15:24:49.442298476 +0000 UTC m=+20.414609023 (delta=56.812119ms)
	I1013 15:24:49.575992 1863457 fix.go:200] guest clock delta is within tolerance: 56.812119ms
	I1013 15:24:49.575998 1863457 start.go:83] releasing machines lock for "calico-045564", held for 19.421416668s
	I1013 15:24:49.576029 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:49.576366 1863457 main.go:141] libmachine: (calico-045564) Calling .GetIP
	I1013 15:24:49.581475 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.582004 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.582028 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.582349 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:49.583083 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:49.583366 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:24:49.583463 1863457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:24:49.583515 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:49.583635 1863457 ssh_runner.go:195] Run: cat /version.json
	I1013 15:24:49.583653 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:24:49.589413 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.589498 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.590017 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.590098 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.590443 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:49.590469 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:49.590597 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:49.590796 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.590927 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:24:49.590992 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:49.591178 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:24:49.591237 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:24:49.591366 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:24:49.591585 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:24:49.709162 1863457 ssh_runner.go:195] Run: systemctl --version
	I1013 15:24:49.721021 1863457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:24:49.730931 1863457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:24:49.731016 1863457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:24:49.771314 1863457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:24:49.771352 1863457 start.go:495] detecting cgroup driver to use...
	I1013 15:24:49.771447 1863457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:24:49.836589 1863457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:24:49.869541 1863457 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:24:49.869645 1863457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:24:49.900680 1863457 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:24:49.926171 1863457 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:24:50.127658 1863457 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:24:50.366475 1863457 docker.go:234] disabling docker service ...
	I1013 15:24:50.366554 1863457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:24:50.390318 1863457 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:24:50.414286 1863457 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:24:50.627222 1863457 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:24:50.810810 1863457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:24:50.830058 1863457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:24:50.861665 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:24:50.877157 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:24:50.892777 1863457 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:24:50.892862 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:24:50.908882 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:24:50.923553 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:24:50.941369 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:24:50.959662 1863457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:24:50.981656 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:24:51.001503 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:24:51.021007 1863457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:24:51.041414 1863457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:24:51.057701 1863457 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:24:51.057807 1863457 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:24:51.085347 1863457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:24:51.104063 1863457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:24:51.288629 1863457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:24:51.343750 1863457 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:24:51.343849 1863457 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:24:51.351654 1863457 retry.go:31] will retry after 674.043318ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:24:52.026628 1863457 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:24:52.034754 1863457 start.go:563] Will wait 60s for crictl version
	I1013 15:24:52.034835 1863457 ssh_runner.go:195] Run: which crictl
	I1013 15:24:52.041616 1863457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:24:52.095367 1863457 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:24:52.095444 1863457 ssh_runner.go:195] Run: containerd --version
	I1013 15:24:52.129228 1863457 ssh_runner.go:195] Run: containerd --version
	I1013 15:24:52.162844 1863457 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:24:52.164965 1863457 main.go:141] libmachine: (calico-045564) Calling .GetIP
	I1013 15:24:52.169185 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:52.169712 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:24:52.169764 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:24:52.170101 1863457 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 15:24:52.177409 1863457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:24:52.196852 1863457 kubeadm.go:883] updating cluster {Name:calico-045564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-045564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:24:52.196977 1863457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:24:52.197056 1863457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:24:52.248767 1863457 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 15:24:52.248880 1863457 ssh_runner.go:195] Run: which lz4
	I1013 15:24:52.255588 1863457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 15:24:52.261797 1863457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 15:24:52.261841 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 15:24:54.190344 1863457 containerd.go:563] duration metric: took 1.934794529s to copy over tarball
	I1013 15:24:54.190440 1863457 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 15:24:56.048373 1863457 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.857898535s)
	I1013 15:24:56.048418 1863457 containerd.go:570] duration metric: took 1.858027705s to extract the tarball
	I1013 15:24:56.048429 1863457 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 15:24:56.093933 1863457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:24:56.254879 1863457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:24:56.309244 1863457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:24:56.350048 1863457 retry.go:31] will retry after 209.005268ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:24:56Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 15:24:56.559581 1863457 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:24:56.606261 1863457 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:24:56.606306 1863457 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:24:56.606316 1863457 kubeadm.go:934] updating node { 192.168.50.7 8443 v1.34.1 containerd true true} ...
	I1013 15:24:56.606458 1863457 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-045564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-045564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1013 15:24:56.606532 1863457 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:24:56.653372 1863457 cni.go:84] Creating CNI manager for "calico"
	I1013 15:24:56.653416 1863457 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 15:24:56.653457 1863457 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-045564 NodeName:calico-045564 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:24:56.653609 1863457 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-045564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:24:56.653693 1863457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:24:56.669175 1863457 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:24:56.669277 1863457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:24:56.685616 1863457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1013 15:24:56.711447 1863457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:24:56.735190 1863457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1013 15:24:56.761507 1863457 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I1013 15:24:56.767149 1863457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:24:56.785700 1863457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:24:56.954171 1863457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:24:56.991472 1863457 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564 for IP: 192.168.50.7
	I1013 15:24:56.991504 1863457 certs.go:195] generating shared ca certs ...
	I1013 15:24:56.991527 1863457 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:56.991798 1863457 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:24:56.991865 1863457 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:24:56.991880 1863457 certs.go:257] generating profile certs ...
	I1013 15:24:56.991944 1863457 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.key
	I1013 15:24:56.991967 1863457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.crt with IP's: []
	I1013 15:24:57.446149 1863457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.crt ...
	I1013 15:24:57.446191 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.crt: {Name:mke3581863bfbabaf5e3f7f4c27700ef66a3f20f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.459356 1863457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.key ...
	I1013 15:24:57.459400 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/client.key: {Name:mk03bcb67f9137a0078b0408e933a35618b533a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.459593 1863457 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key.98de4ead
	I1013 15:24:57.459615 1863457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt.98de4ead with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.7]
	I1013 15:24:57.681342 1863457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt.98de4ead ...
	I1013 15:24:57.681381 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt.98de4ead: {Name:mk247400ce156f77a6042e707a0e980ef2ab3ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.681594 1863457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key.98de4ead ...
	I1013 15:24:57.681621 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key.98de4ead: {Name:mk9bbb619b9d68605dd9c9432605b8ded4a9f0d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.681810 1863457 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt.98de4ead -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt
	I1013 15:24:57.681937 1863457 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key.98de4ead -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key
	I1013 15:24:57.682046 1863457 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.key
	I1013 15:24:57.682065 1863457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.crt with IP's: []
	I1013 15:24:57.783760 1863457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.crt ...
	I1013 15:24:57.783796 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.crt: {Name:mk8ba6ab5d0ce66d4c85cd8db9509aebe10c26f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.787993 1863457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.key ...
	I1013 15:24:57.788031 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.key: {Name:mk2b3e4be382bb342fc15cf9fbcfb661bb4c31d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:24:57.788378 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:24:57.788447 1863457 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:24:57.788463 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:24:57.788501 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:24:57.788537 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:24:57.788574 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:24:57.788630 1863457 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:24:57.789266 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:24:57.833548 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:24:57.873116 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:24:57.910703 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:24:57.949011 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 15:24:57.991814 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 15:24:58.035652 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:24:58.078286 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/calico-045564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:24:58.117795 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:24:58.151802 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:24:58.187905 1863457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:24:58.223180 1863457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:24:58.250931 1863457 ssh_runner.go:195] Run: openssl version
	I1013 15:24:58.259085 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:24:58.275804 1863457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:24:58.282931 1863457 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:24:58.283034 1863457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:24:58.292352 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:24:58.309128 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:24:58.330225 1863457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:24:58.336831 1863457 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:24:58.336923 1863457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:24:58.345970 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:24:58.365499 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:24:58.386553 1863457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:24:58.393188 1863457 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:24:58.393280 1863457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:24:58.401382 1863457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:24:58.417265 1863457 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:24:58.423274 1863457 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 15:24:58.423349 1863457 kubeadm.go:400] StartCluster: {Name:calico-045564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-045564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:24:58.423461 1863457 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:24:58.423530 1863457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:24:58.472335 1863457 cri.go:89] found id: ""
	I1013 15:24:58.472419 1863457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:24:58.492365 1863457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:24:58.508802 1863457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:24:58.526558 1863457 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:24:58.526582 1863457 kubeadm.go:157] found existing configuration files:
	
	I1013 15:24:58.526640 1863457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 15:24:58.541282 1863457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:24:58.541367 1863457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:24:58.564415 1863457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 15:24:58.577376 1863457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:24:58.577441 1863457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:24:58.591383 1863457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 15:24:58.605889 1863457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:24:58.605954 1863457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:24:58.620061 1863457 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 15:24:58.633473 1863457 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:24:58.633564 1863457 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:24:58.650184 1863457 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 15:24:58.739004 1863457 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 15:24:58.739070 1863457 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 15:24:58.874029 1863457 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 15:24:58.874185 1863457 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 15:24:58.874320 1863457 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 15:24:58.885485 1863457 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 15:24:58.888643 1863457 out.go:252]   - Generating certificates and keys ...
	I1013 15:24:58.888788 1863457 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 15:24:58.888867 1863457 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 15:24:59.240824 1863457 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 15:24:59.399843 1863457 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 15:24:59.608437 1863457 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 15:24:59.871578 1863457 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 15:25:00.161743 1863457 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 15:25:00.162073 1863457 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-045564 localhost] and IPs [192.168.50.7 127.0.0.1 ::1]
	I1013 15:25:00.486552 1863457 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 15:25:00.486767 1863457 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-045564 localhost] and IPs [192.168.50.7 127.0.0.1 ::1]
	I1013 15:25:01.025675 1863457 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 15:25:01.701885 1863457 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 15:25:01.863631 1863457 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 15:25:01.863754 1863457 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 15:25:02.106038 1863457 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 15:25:02.336375 1863457 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 15:25:02.439041 1863457 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 15:25:03.073362 1863457 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 15:25:03.313659 1863457 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 15:25:03.315349 1863457 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 15:25:03.317141 1863457 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 15:25:03.319922 1863457 out.go:252]   - Booting up control plane ...
	I1013 15:25:03.320087 1863457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 15:25:03.320222 1863457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 15:25:03.320344 1863457 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 15:25:03.339782 1863457 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 15:25:03.339953 1863457 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 15:25:03.353617 1863457 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 15:25:03.354177 1863457 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 15:25:03.354266 1863457 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 15:25:03.560834 1863457 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 15:25:03.560993 1863457 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 15:25:04.562036 1863457 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002263601s
	I1013 15:25:04.564836 1863457 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 15:25:04.564953 1863457 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.7:8443/livez
	I1013 15:25:04.565137 1863457 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 15:25:04.565248 1863457 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 15:25:07.895292 1863457 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.33351787s
	I1013 15:25:08.943548 1863457 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.38225215s
	I1013 15:25:11.062386 1863457 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502522251s
	I1013 15:25:11.086889 1863457 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 15:25:11.107026 1863457 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 15:25:11.129932 1863457 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 15:25:11.130251 1863457 kubeadm.go:318] [mark-control-plane] Marking the node calico-045564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 15:25:11.151192 1863457 kubeadm.go:318] [bootstrap-token] Using token: paativ.6hxdk3xe75yu22mb
	I1013 15:25:11.153155 1863457 out.go:252]   - Configuring RBAC rules ...
	I1013 15:25:11.153335 1863457 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 15:25:11.172556 1863457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 15:25:11.188140 1863457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 15:25:11.194600 1863457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 15:25:11.198563 1863457 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 15:25:11.218044 1863457 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 15:25:11.477110 1863457 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 15:25:12.018646 1863457 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 15:25:12.472996 1863457 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 15:25:12.475173 1863457 kubeadm.go:318] 
	I1013 15:25:12.475314 1863457 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 15:25:12.475328 1863457 kubeadm.go:318] 
	I1013 15:25:12.475423 1863457 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 15:25:12.475437 1863457 kubeadm.go:318] 
	I1013 15:25:12.475470 1863457 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 15:25:12.475598 1863457 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 15:25:12.475687 1863457 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 15:25:12.475697 1863457 kubeadm.go:318] 
	I1013 15:25:12.475789 1863457 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 15:25:12.475802 1863457 kubeadm.go:318] 
	I1013 15:25:12.475865 1863457 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 15:25:12.475876 1863457 kubeadm.go:318] 
	I1013 15:25:12.475957 1863457 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 15:25:12.476109 1863457 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 15:25:12.476225 1863457 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 15:25:12.476265 1863457 kubeadm.go:318] 
	I1013 15:25:12.476391 1863457 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 15:25:12.476498 1863457 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 15:25:12.476515 1863457 kubeadm.go:318] 
	I1013 15:25:12.476634 1863457 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token paativ.6hxdk3xe75yu22mb \
	I1013 15:25:12.476842 1863457 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 15:25:12.476880 1863457 kubeadm.go:318] 	--control-plane 
	I1013 15:25:12.476890 1863457 kubeadm.go:318] 
	I1013 15:25:12.477026 1863457 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 15:25:12.477036 1863457 kubeadm.go:318] 
	I1013 15:25:12.477148 1863457 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token paativ.6hxdk3xe75yu22mb \
	I1013 15:25:12.477290 1863457 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 15:25:12.479154 1863457 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 15:25:12.479194 1863457 cni.go:84] Creating CNI manager for "calico"
	I1013 15:25:12.482018 1863457 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1013 15:25:12.484440 1863457 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 15:25:12.484477 1863457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1013 15:25:12.527132 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 15:25:15.150386 1863457 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.623204716s)
	I1013 15:25:15.150439 1863457 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:25:15.150595 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-045564 minikube.k8s.io/updated_at=2025_10_13T15_25_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=calico-045564 minikube.k8s.io/primary=true
	I1013 15:25:15.150775 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:15.194780 1863457 ops.go:34] apiserver oom_adj: -16
	I1013 15:25:15.369451 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:15.869990 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:16.370418 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:16.870218 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:17.370490 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:17.869955 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:18.370504 1863457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:25:18.616551 1863457 kubeadm.go:1113] duration metric: took 3.466067604s to wait for elevateKubeSystemPrivileges
	I1013 15:25:18.616602 1863457 kubeadm.go:402] duration metric: took 20.193254962s to StartCluster
	I1013 15:25:18.616632 1863457 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:25:18.616782 1863457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:25:18.618629 1863457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:25:18.618971 1863457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 15:25:18.618984 1863457 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:25:18.618960 1863457 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:25:18.619069 1863457 addons.go:69] Setting default-storageclass=true in profile "calico-045564"
	I1013 15:25:18.619084 1863457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-045564"
	I1013 15:25:18.619063 1863457 addons.go:69] Setting storage-provisioner=true in profile "calico-045564"
	I1013 15:25:18.619361 1863457 addons.go:238] Setting addon storage-provisioner=true in "calico-045564"
	I1013 15:25:18.619379 1863457 config.go:182] Loaded profile config "calico-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:25:18.619419 1863457 host.go:66] Checking if "calico-045564" exists ...
	I1013 15:25:18.619594 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:25:18.619634 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:25:18.620024 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:25:18.620066 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:25:18.620798 1863457 out.go:179] * Verifying Kubernetes components...
	I1013 15:25:18.621867 1863457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:25:18.650359 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I1013 15:25:18.650978 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:25:18.651597 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I1013 15:25:18.651706 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:25:18.651755 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:25:18.652281 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:25:18.652318 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:25:18.652963 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:25:18.652994 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:25:18.653058 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:25:18.653100 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:25:18.653486 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:25:18.653746 1863457 main.go:141] libmachine: (calico-045564) Calling .GetState
	I1013 15:25:18.658859 1863457 addons.go:238] Setting addon default-storageclass=true in "calico-045564"
	I1013 15:25:18.658920 1863457 host.go:66] Checking if "calico-045564" exists ...
	I1013 15:25:18.659306 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:25:18.659364 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:25:18.676411 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I1013 15:25:18.677222 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:25:18.677971 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:25:18.677999 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:25:18.678491 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:25:18.678755 1863457 main.go:141] libmachine: (calico-045564) Calling .GetState
	I1013 15:25:18.681835 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:25:18.682362 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I1013 15:25:18.682979 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:25:18.683628 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:25:18.683653 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:25:18.684145 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:25:18.684156 1863457 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:25:18.684847 1863457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:25:18.684910 1863457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:25:18.688922 1863457 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:25:18.688946 1863457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:25:18.688973 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:25:18.695146 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:25:18.695855 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:25:18.695892 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:25:18.696359 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:25:18.696653 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:25:18.696879 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:25:18.697188 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:25:18.703401 1863457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I1013 15:25:18.704287 1863457 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:25:18.705154 1863457 main.go:141] libmachine: Using API Version  1
	I1013 15:25:18.705191 1863457 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:25:18.705648 1863457 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:25:18.706210 1863457 main.go:141] libmachine: (calico-045564) Calling .GetState
	I1013 15:25:18.709775 1863457 main.go:141] libmachine: (calico-045564) Calling .DriverName
	I1013 15:25:18.710099 1863457 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:25:18.710117 1863457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:25:18.710142 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHHostname
	I1013 15:25:18.714952 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:25:18.715247 1863457 main.go:141] libmachine: (calico-045564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:c8:0a", ip: ""} in network mk-calico-045564: {Iface:virbr2 ExpiryTime:2025-10-13 16:24:47 +0000 UTC Type:0 Mac:52:54:00:55:c8:0a Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:calico-045564 Clientid:01:52:54:00:55:c8:0a}
	I1013 15:25:18.715272 1863457 main.go:141] libmachine: (calico-045564) DBG | domain calico-045564 has defined IP address 192.168.50.7 and MAC address 52:54:00:55:c8:0a in network mk-calico-045564
	I1013 15:25:18.715542 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHPort
	I1013 15:25:18.715740 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHKeyPath
	I1013 15:25:18.715891 1863457 main.go:141] libmachine: (calico-045564) Calling .GetSSHUsername
	I1013 15:25:18.716107 1863457 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/calico-045564/id_rsa Username:docker}
	I1013 15:25:19.001983 1863457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 15:25:19.037252 1863457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:25:19.265453 1863457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:25:19.381228 1863457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:25:19.796266 1863457 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1013 15:25:19.797702 1863457 node_ready.go:35] waiting up to 15m0s for node "calico-045564" to be "Ready" ...
	I1013 15:25:20.306729 1863457 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-045564" context rescaled to 1 replicas
	I1013 15:25:20.579008 1863457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.313505277s)
	I1013 15:25:20.579303 1863457 main.go:141] libmachine: Making call to close driver server
	I1013 15:25:20.579330 1863457 main.go:141] libmachine: (calico-045564) Calling .Close
	I1013 15:25:20.579473 1863457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.198188341s)
	I1013 15:25:20.579517 1863457 main.go:141] libmachine: Making call to close driver server
	I1013 15:25:20.579527 1863457 main.go:141] libmachine: (calico-045564) Calling .Close
	I1013 15:25:20.583149 1863457 main.go:141] libmachine: (calico-045564) DBG | Closing plugin on server side
	I1013 15:25:20.583177 1863457 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:25:20.583193 1863457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:25:20.583203 1863457 main.go:141] libmachine: Making call to close driver server
	I1013 15:25:20.583212 1863457 main.go:141] libmachine: (calico-045564) Calling .Close
	I1013 15:25:20.583145 1863457 main.go:141] libmachine: (calico-045564) DBG | Closing plugin on server side
	I1013 15:25:20.583406 1863457 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:25:20.583436 1863457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:25:20.583457 1863457 main.go:141] libmachine: Making call to close driver server
	I1013 15:25:20.583472 1863457 main.go:141] libmachine: (calico-045564) Calling .Close
	I1013 15:25:20.583921 1863457 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:25:20.583939 1863457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:25:20.583995 1863457 main.go:141] libmachine: (calico-045564) DBG | Closing plugin on server side
	I1013 15:25:20.584033 1863457 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:25:20.584046 1863457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:25:20.601329 1863457 main.go:141] libmachine: Making call to close driver server
	I1013 15:25:20.601355 1863457 main.go:141] libmachine: (calico-045564) Calling .Close
	I1013 15:25:20.601743 1863457 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:25:20.601774 1863457 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:25:20.601799 1863457 main.go:141] libmachine: (calico-045564) DBG | Closing plugin on server side
	I1013 15:25:20.603661 1863457 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 15:25:20.604943 1863457 addons.go:514] duration metric: took 1.985949501s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1013 15:25:21.801017 1863457 node_ready.go:57] node "calico-045564" has "Ready":"False" status (will retry)
	W1013 15:25:23.805563 1863457 node_ready.go:57] node "calico-045564" has "Ready":"False" status (will retry)
	W1013 15:25:25.807687 1863457 node_ready.go:57] node "calico-045564" has "Ready":"False" status (will retry)
	I1013 15:25:26.304701 1863457 node_ready.go:49] node "calico-045564" is "Ready"
	I1013 15:25:26.304769 1863457 node_ready.go:38] duration metric: took 6.506975044s for node "calico-045564" to be "Ready" ...
	I1013 15:25:26.304787 1863457 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:25:26.304867 1863457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:25:26.349030 1863457 api_server.go:72] duration metric: took 7.729932515s to wait for apiserver process to appear ...
	I1013 15:25:26.349076 1863457 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:25:26.349109 1863457 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1013 15:25:26.357689 1863457 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1013 15:25:26.361693 1863457 api_server.go:141] control plane version: v1.34.1
	I1013 15:25:26.361752 1863457 api_server.go:131] duration metric: took 12.666139ms to wait for apiserver health ...
	I1013 15:25:26.361766 1863457 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:25:26.372814 1863457 system_pods.go:59] 9 kube-system pods found
	I1013 15:25:26.372875 1863457 system_pods.go:61] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:26.372903 1863457 system_pods.go:61] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:26.372920 1863457 system_pods.go:61] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:26.372927 1863457 system_pods.go:61] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:26.372937 1863457 system_pods.go:61] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:26.372943 1863457 system_pods.go:61] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:26.372952 1863457 system_pods.go:61] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:26.372958 1863457 system_pods.go:61] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:26.373009 1863457 system_pods.go:61] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:25:26.373022 1863457 system_pods.go:74] duration metric: took 11.248649ms to wait for pod list to return data ...
	I1013 15:25:26.373038 1863457 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:25:26.380856 1863457 default_sa.go:45] found service account: "default"
	I1013 15:25:26.380898 1863457 default_sa.go:55] duration metric: took 7.847084ms for default service account to be created ...
	I1013 15:25:26.380915 1863457 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 15:25:26.389694 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:26.389748 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:26.389762 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:26.389772 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:26.389779 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:26.389787 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:26.389793 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:26.389799 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:26.389808 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:26.389816 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:25:26.389842 1863457 retry.go:31] will retry after 252.219808ms: missing components: kube-dns
	I1013 15:25:26.665703 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:26.665784 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:26.665800 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:26.665813 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:26.665820 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:26.665827 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:26.665832 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:26.665838 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:26.665842 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:26.665849 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:25:26.665872 1863457 retry.go:31] will retry after 289.754728ms: missing components: kube-dns
	I1013 15:25:26.963607 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:26.963660 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:26.963683 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:26.963694 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:26.963705 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:26.963725 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:26.963732 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:26.963739 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:26.963746 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:26.963755 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:25:26.963783 1863457 retry.go:31] will retry after 421.169586ms: missing components: kube-dns
	I1013 15:25:27.390502 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:27.390539 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:27.390560 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:27.390574 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:27.390580 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:27.390587 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:27.390596 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:27.390601 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:27.390607 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:27.390621 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:27.390643 1863457 retry.go:31] will retry after 462.455446ms: missing components: kube-dns
	I1013 15:25:27.858853 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:27.858893 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:27.858915 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:27.858940 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:27.858947 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:27.858957 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:27.858963 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:27.858969 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:27.858977 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:27.858982 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:27.859007 1863457 retry.go:31] will retry after 676.77897ms: missing components: kube-dns
	I1013 15:25:28.542376 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:28.542412 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:28.542421 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:28.542427 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:28.542434 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:28.542439 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:28.542442 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:28.542445 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:28.542448 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:28.542451 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:28.542472 1863457 retry.go:31] will retry after 765.386687ms: missing components: kube-dns
	I1013 15:25:29.317847 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:29.317894 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:29.317911 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:29.317922 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:29.317930 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:29.317938 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:29.317952 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:29.317957 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:29.317962 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:29.317967 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:29.317990 1863457 retry.go:31] will retry after 767.440455ms: missing components: kube-dns
	I1013 15:25:30.095225 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:30.095273 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:30.095287 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:30.095297 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:30.095304 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:30.095311 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:30.095316 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:30.095322 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:30.095327 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:30.095331 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:30.095352 1863457 retry.go:31] will retry after 923.037375ms: missing components: kube-dns
	I1013 15:25:31.026408 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:31.026452 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:31.026465 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:31.026476 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:31.026482 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:31.026490 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:31.026496 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:31.026501 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:31.026506 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:31.026510 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:31.026531 1863457 retry.go:31] will retry after 1.631202791s: missing components: kube-dns
	I1013 15:25:32.664632 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:32.664682 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:32.664696 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:32.664704 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:32.664735 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:32.664748 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:32.664754 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:32.664763 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:32.664769 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:32.664778 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:32.664801 1863457 retry.go:31] will retry after 1.47232847s: missing components: kube-dns
	I1013 15:25:34.144932 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:34.144965 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:34.144973 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:34.144980 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:34.144985 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:34.144990 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:34.144995 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:34.144998 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:34.145002 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:34.145004 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:34.145020 1863457 retry.go:31] will retry after 1.990387488s: missing components: kube-dns
	I1013 15:25:36.142730 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:36.142777 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:36.142793 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:36.142804 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:36.142812 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:36.142820 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:36.142827 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:36.142839 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:36.142845 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:36.142852 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:36.142875 1863457 retry.go:31] will retry after 2.370347241s: missing components: kube-dns
	I1013 15:25:38.549978 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:38.550032 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:38.550046 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:38.550061 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:38.550067 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:38.550076 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:38.550082 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:38.550087 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:38.550092 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:38.550097 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:38.550122 1863457 retry.go:31] will retry after 3.339502434s: missing components: kube-dns
	I1013 15:25:42.002838 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:42.002890 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:42.002906 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:42.002915 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:42.002923 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:42.002930 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:42.002936 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:42.002943 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:42.002948 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:42.002972 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:42.002995 1863457 retry.go:31] will retry after 3.939058934s: missing components: kube-dns
	I1013 15:25:45.962738 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:45.962792 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:45.962807 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:45.962819 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:45.962825 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:45.962833 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:45.962841 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:45.962850 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:45.962856 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:45.962861 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:45.962887 1863457 retry.go:31] will retry after 5.537708464s: missing components: kube-dns
	I1013 15:25:51.507337 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:51.507372 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:51.507381 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:51.507387 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:51.507391 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:51.507396 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:51.507399 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:51.507403 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:51.507407 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:51.507409 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:51.507426 1863457 retry.go:31] will retry after 7.494102615s: missing components: kube-dns
	I1013 15:25:59.007087 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:25:59.007120 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:25:59.007129 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:25:59.007136 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:25:59.007140 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:25:59.007144 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:25:59.007148 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:25:59.007157 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:25:59.007161 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:25:59.007163 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:25:59.007181 1863457 retry.go:31] will retry after 8.176766045s: missing components: kube-dns
	I1013 15:26:07.192328 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:26:07.192383 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:26:07.192400 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:26:07.192411 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:26:07.192419 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:26:07.192426 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:26:07.192431 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:26:07.192440 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:26:07.192445 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:26:07.192451 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:26:07.192476 1863457 retry.go:31] will retry after 13.342083712s: missing components: kube-dns
	I1013 15:26:20.539102 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:26:20.539147 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:26:20.539159 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:26:20.539168 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:26:20.539174 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:26:20.539180 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:26:20.539185 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:26:20.539190 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:26:20.539199 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:26:20.539204 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:26:20.539231 1863457 retry.go:31] will retry after 13.204080474s: missing components: kube-dns
	I1013 15:26:33.752283 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:26:33.752340 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:26:33.752355 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:26:33.752366 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:26:33.752379 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:26:33.752388 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:26:33.752393 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:26:33.752401 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:26:33.752411 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:26:33.752417 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:26:33.752446 1863457 retry.go:31] will retry after 20.510774577s: missing components: kube-dns
	I1013 15:26:54.269686 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:26:54.269750 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:26:54.269765 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:26:54.269781 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:26:54.269786 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:26:54.269791 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:26:54.269795 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:26:54.269799 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:26:54.269803 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:26:54.269806 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:26:54.269826 1863457 retry.go:31] will retry after 18.838723397s: missing components: kube-dns
	I1013 15:27:13.115948 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:27:13.115984 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:27:13.116016 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:27:13.116024 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:27:13.116028 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:27:13.116032 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:27:13.116036 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:27:13.116040 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:27:13.116043 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:27:13.116046 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:27:13.116071 1863457 retry.go:31] will retry after 20.357381979s: missing components: kube-dns
	I1013 15:27:33.482897 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:27:33.482941 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:27:33.482951 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:27:33.482969 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:27:33.482978 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:27:33.482988 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:27:33.482994 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:27:33.483000 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:27:33.483005 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:27:33.483013 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:27:33.483034 1863457 retry.go:31] will retry after 30.172904139s: missing components: kube-dns
	I1013 15:28:03.665795 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:28:03.665853 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:28:03.665872 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:28:03.665883 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:28:03.665891 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:28:03.665899 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:28:03.665907 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:28:03.665914 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:28:03.665920 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:28:03.665936 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:28:03.665960 1863457 retry.go:31] will retry after 50.295443532s: missing components: kube-dns
	I1013 15:28:53.969435 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:28:53.969472 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:28:53.969480 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:28:53.969487 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:28:53.969491 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:28:53.969506 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:28:53.969512 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:28:53.969517 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:28:53.969522 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:28:53.969527 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:28:53.969554 1863457 retry.go:31] will retry after 47.802168607s: missing components: kube-dns
	I1013 15:29:41.777428 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:29:41.777472 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:29:41.777483 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:29:41.777490 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:29:41.777494 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:29:41.777498 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:29:41.777501 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:29:41.777505 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:29:41.777508 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:29:41.777513 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:29:41.777528 1863457 retry.go:31] will retry after 1m9.55410847s: missing components: kube-dns
	I1013 15:30:51.338095 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:30:51.338143 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:30:51.338163 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:30:51.338175 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:30:51.338180 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:30:51.338187 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:30:51.338193 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:30:51.338200 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:30:51.338206 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:30:51.338212 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:30:51.338237 1863457 retry.go:31] will retry after 1m6.355222049s: missing components: kube-dns
	I1013 15:31:57.699612 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:31:57.699658 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:31:57.699669 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:31:57.699676 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:31:57.699682 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:31:57.699688 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:31:57.699691 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:31:57.699696 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:31:57.699699 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:31:57.699704 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:31:57.699745 1863457 retry.go:31] will retry after 1m3.241795234s: missing components: kube-dns
	I1013 15:33:00.947033 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:33:00.947079 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:33:00.947095 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:33:00.947101 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:33:00.947107 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:33:00.947112 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:33:00.947116 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:33:00.947121 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:33:00.947126 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:33:00.947131 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:33:00.947155 1863457 retry.go:31] will retry after 1m14.587260369s: missing components: kube-dns
	I1013 15:34:15.541121 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:34:15.541165 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:34:15.541177 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:34:15.541184 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:34:15.541189 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:34:15.541195 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:34:15.541201 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:34:15.541207 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:34:15.541212 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:34:15.541227 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:34:15.541255 1863457 retry.go:31] will retry after 48.728471458s: missing components: kube-dns
	I1013 15:35:04.277210 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:35:04.277254 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:35:04.277267 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:35:04.277275 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:35:04.277282 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:35:04.277286 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:35:04.277290 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:35:04.277295 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:35:04.277298 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:35:04.277301 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:35:04.277322 1863457 retry.go:31] will retry after 1m13.437131411s: missing components: kube-dns
	I1013 15:36:17.721455 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:36:17.721517 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:36:17.721535 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:36:17.721546 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:36:17.721553 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:36:17.721559 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:36:17.721566 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:36:17.721574 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:36:17.721583 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:36:17.721593 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:36:17.721622 1863457 retry.go:31] will retry after 1m4.267877059s: missing components: kube-dns
	I1013 15:37:21.999382 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:37:21.999433 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:37:21.999450 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:37:21.999461 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:37:21.999467 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:37:21.999474 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:37:21.999479 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:37:21.999485 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:37:21.999494 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:37:21.999499 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:37:21.999544 1863457 retry.go:31] will retry after 50.056535157s: missing components: kube-dns
	I1013 15:38:12.062166 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:38:12.062209 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:38:12.062230 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:38:12.062237 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:38:12.062241 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:38:12.062247 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:38:12.062251 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:38:12.062255 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:38:12.062259 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:38:12.062264 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:38:12.062283 1863457 retry.go:31] will retry after 1m13.088354932s: missing components: kube-dns
	I1013 15:39:25.156348 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:39:25.156398 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:39:25.156412 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:39:25.156420 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:39:25.156424 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:39:25.156428 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:39:25.156431 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:39:25.156436 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:39:25.156439 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:39:25.156442 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:39:25.156460 1863457 retry.go:31] will retry after 50.678318996s: missing components: kube-dns
	I1013 15:40:15.841613 1863457 system_pods.go:86] 9 kube-system pods found
	I1013 15:40:15.841661 1863457 system_pods.go:89] "calico-kube-controllers-59556d9b4c-dfzgw" [a2bb060b-8a31-4816-809d-16856578f5db] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1013 15:40:15.841676 1863457 system_pods.go:89] "calico-node-t2nbm" [0a6f2f11-0851-495d-9299-bfe18caaebfe] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1013 15:40:15.841686 1863457 system_pods.go:89] "coredns-66bc5c9577-6mq26" [5d2b5c47-94eb-4335-9f4f-a5062ef37a77] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:40:15.841694 1863457 system_pods.go:89] "etcd-calico-045564" [34077589-e976-432e-b694-92c645c8da74] Running
	I1013 15:40:15.841701 1863457 system_pods.go:89] "kube-apiserver-calico-045564" [27f4d6b9-cf42-4ccc-a250-9061a6d27593] Running
	I1013 15:40:15.841782 1863457 system_pods.go:89] "kube-controller-manager-calico-045564" [82011caf-dd73-44f5-bf25-72c23205d441] Running
	I1013 15:40:15.841801 1863457 system_pods.go:89] "kube-proxy-nm4hg" [53ec8ac9-672d-48bf-bfdc-865c44d0d29f] Running
	I1013 15:40:15.841807 1863457 system_pods.go:89] "kube-scheduler-calico-045564" [97263b7c-75ac-41d8-8a72-13c0e2a76829] Running
	I1013 15:40:15.841815 1863457 system_pods.go:89] "storage-provisioner" [3c77b52a-e6dd-4ff9-a89d-ab2128a3ef9c] Running
	I1013 15:40:15.844365 1863457 out.go:203] 
	W1013 15:40:15.846088 1863457 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W1013 15:40:15.846108 1863457 out.go:285] * 
	W1013 15:40:15.847839 1863457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 15:40:15.849790 1863457 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (946.91s)
E1013 15:44:28.862696 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:49.344633 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:45:06.217150 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:45:15.272654 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:45:23.600107 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:45:30.306425 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:45:37.947933 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:46:07.248435 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:46:29.294033 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:46:38.337321 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:46:52.228633 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:47:01.537225 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:47:17.918476 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:47:20.513691 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:47:30.316297 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:48:01.800496 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:48:24.602000 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:48:40.983123 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:49:08.365466 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:49:14.882928 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:49:24.866228 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:49:36.070598 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c5cw9" [3c77287c-8148-47b6-a144-a38a1c954408] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316150 -n old-k8s-version-316150
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:40:31.604565665 +0000 UTC m=+6322.545124048
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-316150 describe po kubernetes-dashboard-8694d4445c-c5cw9 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-316150 describe po kubernetes-dashboard-8694d4445c-c5cw9 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-c5cw9
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-316150/192.168.39.114
Start Time:       Mon, 13 Oct 2025 15:31:28 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gt5vs (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-gt5vs:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m3s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9 to old-k8s-version-316150
Warning  Failed     8m50s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m39s (x4 over 9m2s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     7m39s (x3 over 9m2s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m39s (x4 over 9m2s)   kubelet            Error: ErrImagePull
Warning  Failed     7m14s (x6 over 9m2s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    3m55s (x20 over 9m2s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-316150 logs kubernetes-dashboard-8694d4445c-c5cw9 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-316150 logs kubernetes-dashboard-8694d4445c-c5cw9 -n kubernetes-dashboard: exit status 1 (92.225136ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-c5cw9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-316150 logs kubernetes-dashboard-8694d4445c-c5cw9 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316150 -n old-k8s-version-316150
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-316150 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-316150 logs -n 25: (1.977136535s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                   ARGS                                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-045564 sudo systemctl cat kubelet --no-pager                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status docker --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat docker --no-pager                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/docker/daemon.json                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo docker system info                                                                                                                                                                 │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat cri-docker --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                           │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                     │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cri-dockerd --version                                                                                                                                                              │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status containerd --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl cat containerd --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /lib/systemd/system/containerd.service                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/containerd/config.toml                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo containerd config dump                                                                                                                                                             │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status crio --all --full --no-pager                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat crio --no-pager                                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                            │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo crio config                                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p calico-045564                                                                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p disable-driver-mounts-917680                                                                                                                                                                          │ disable-driver-mounts-917680 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:40:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:40:30.985466 1879347 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:40:30.985793 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.985805 1879347 out.go:374] Setting ErrFile to fd 2...
	I1013 15:40:30.985809 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.986023 1879347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:40:30.986587 1879347 out.go:368] Setting JSON to false
	I1013 15:40:30.987896 1879347 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26579,"bootTime":1760343452,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:40:30.988008 1879347 start.go:141] virtualization: kvm guest
	I1013 15:40:30.990315 1879347 out.go:179] * [default-k8s-diff-port-426789] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:40:30.991995 1879347 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:40:30.992017 1879347 notify.go:220] Checking for updates...
	I1013 15:40:30.995009 1879347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:40:30.996863 1879347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:40:30.998430 1879347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:30.999970 1879347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:40:31.001304 1879347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:40:31.003293 1879347 config.go:182] Loaded profile config "embed-certs-516717": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003416 1879347 config.go:182] Loaded profile config "no-preload-673307": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003518 1879347 config.go:182] Loaded profile config "old-k8s-version-316150": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1013 15:40:31.003630 1879347 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:40:31.043746 1879347 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 15:40:31.045311 1879347 start.go:305] selected driver: kvm2
	I1013 15:40:31.045342 1879347 start.go:925] validating driver "kvm2" against <nil>
	I1013 15:40:31.045361 1879347 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:40:31.046187 1879347 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.046323 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.063606 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.063642 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.081742 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.081796 1879347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 15:40:31.082134 1879347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:40:31.082165 1879347 cni.go:84] Creating CNI manager for ""
	I1013 15:40:31.082248 1879347 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:40:31.082260 1879347 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 15:40:31.082309 1879347 start.go:349] cluster config:
	{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:40:31.082398 1879347 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.084383 1879347 out.go:179] * Starting "default-k8s-diff-port-426789" primary control-plane node in "default-k8s-diff-port-426789" cluster
	I1013 15:40:31.085994 1879347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:40:31.086060 1879347 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:40:31.086072 1879347 cache.go:58] Caching tarball of preloaded images
	I1013 15:40:31.086202 1879347 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:40:31.086218 1879347 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:40:31.086350 1879347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:40:31.086378 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json: {Name:mk3ce3e9d016d5e915bf4b40059397909c76db20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:40:31.086576 1879347 start.go:360] acquireMachinesLock for default-k8s-diff-port-426789: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:40:31.086627 1879347 start.go:364] duration metric: took 30.495µs to acquireMachinesLock for "default-k8s-diff-port-426789"
	I1013 15:40:31.086657 1879347 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:40:31.086772 1879347 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fabe76f9f304e       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   8d923543a8075       dashboard-metrics-scraper-5f989dc9cf-wmkk8
	32d0e35d9b3f3       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   dcf9efc3644a7       storage-provisioner
	35f2d96795a0e       56cc512116c8f       9 minutes ago       Running             busybox                     1                   26b4d9866c81e       busybox
	5089062c0d66d       ead0a4a53df89       9 minutes ago       Running             coredns                     1                   31be3bb40145b       coredns-5dd5756b68-mqzsd
	36b9dbd691691       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   dcf9efc3644a7       storage-provisioner
	f60341937ecf5       ea1030da44aa1       9 minutes ago       Running             kube-proxy                  1                   d42af2a1b1d13       kube-proxy-9p78g
	3cacc4651c108       73deb9a3f7025       9 minutes ago       Running             etcd                        1                   745a2a9871009       etcd-old-k8s-version-316150
	1a5f0247ca831       4be79c38a4bab       9 minutes ago       Running             kube-controller-manager     1                   5a7fa79cb6b83       kube-controller-manager-old-k8s-version-316150
	b68de31b083f1       f6f496300a2ae       9 minutes ago       Running             kube-scheduler              1                   4ef893dccaab1       kube-scheduler-old-k8s-version-316150
	4230bf40da49d       bb5e0dde9054c       9 minutes ago       Running             kube-apiserver              1                   b34fbb8875bc4       kube-apiserver-old-k8s-version-316150
	ccb371bd1cdf3       56cc512116c8f       11 minutes ago      Exited              busybox                     0                   3a77aa5b9f86a       busybox
	fc7c532491cae       ead0a4a53df89       12 minutes ago      Exited              coredns                     0                   5589893139164       coredns-5dd5756b68-mqzsd
	17ceee916069e       ea1030da44aa1       12 minutes ago      Exited              kube-proxy                  0                   ec5dadb97769e       kube-proxy-9p78g
	a8d15b7bed39f       f6f496300a2ae       12 minutes ago      Exited              kube-scheduler              0                   ee2db656afbbc       kube-scheduler-old-k8s-version-316150
	b945924fda5ff       bb5e0dde9054c       12 minutes ago      Exited              kube-apiserver              0                   088786220f129       kube-apiserver-old-k8s-version-316150
	ea30b299d4670       4be79c38a4bab       12 minutes ago      Exited              kube-controller-manager     0                   09e6dd2da42fb       kube-controller-manager-old-k8s-version-316150
	366947a562bd9       73deb9a3f7025       12 minutes ago      Exited              etcd                        0                   e6a01a4224aaf       etcd-old-k8s-version-316150
	
	
	==> containerd <==
	Oct 13 15:34:19 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:19.492022190Z" level=info msg="RemoveContainer for \"66e799a58cd8115596deadc98fc37381b0e7a168b235adb375b61098427f6b7c\""
	Oct 13 15:34:19 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:19.499815878Z" level=info msg="RemoveContainer for \"66e799a58cd8115596deadc98fc37381b0e7a168b235adb375b61098427f6b7c\" returns successfully"
	Oct 13 15:34:24 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:24.541713868Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:34:24 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:24.545987470Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:34:24 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:24.619926305Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:34:24 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:24.725251384Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:34:24 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:34:24.725434949Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 13 15:36:55 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:36:55.539228023Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:36:55 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:36:55.545073314Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:36:55 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:36:55.547496329Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:36:55 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:36:55.547534349Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.541513460Z" level=info msg="CreateContainer within sandbox \"8d923543a8075eb933f4b24520d613059126837ee7e120493758e5770c8b9671\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.568102757Z" level=info msg="CreateContainer within sandbox \"8d923543a8075eb933f4b24520d613059126837ee7e120493758e5770c8b9671\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8\""
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.569297852Z" level=info msg="StartContainer for \"fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8\""
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.577710537Z" level=info msg="RemoveContainer for \"b3e6e35dafeb2a40935807bc14579a1a9bd823baaedffc675df983d62117b0b6\""
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.587878118Z" level=info msg="RemoveContainer for \"b3e6e35dafeb2a40935807bc14579a1a9bd823baaedffc675df983d62117b0b6\" returns successfully"
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.654871461Z" level=info msg="StartContainer for \"fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8\" returns successfully"
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.702279892Z" level=info msg="shim disconnected" id=fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8 namespace=k8s.io
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.702330896Z" level=warning msg="cleaning up after shim disconnected" id=fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8 namespace=k8s.io
	Oct 13 15:37:10 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:10.702353206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:37:15 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:15.540320350Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:37:15 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:15.543786320Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:15 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:15.623921553Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:15 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:15.819687813Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:37:15 old-k8s-version-316150 containerd[723]: time="2025-10-13T15:37:15.819790949Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	
	
	==> coredns [5089062c0d66d8d2bb0891904b40570503f895e820ea163ff530bd140b038057] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55854 - 30852 "HINFO IN 7018689612345149476.5664545429491209434. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.430897911s
	
	
	==> coredns [fc7c532491cae125dd07dfd28d870c58c1140f15303bd63f0a8236a2a8ddd47e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33138 - 49686 "HINFO IN 5082904134800273327.7926247720984815274. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039369031s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-316150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-316150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=old-k8s-version-316150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_28_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:28:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-316150
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:36:52 +0000   Mon, 13 Oct 2025 15:28:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:36:52 +0000   Mon, 13 Oct 2025 15:28:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:36:52 +0000   Mon, 13 Oct 2025 15:28:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:36:52 +0000   Mon, 13 Oct 2025 15:31:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    old-k8s-version-316150
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 708b21e6a97b460da9d19590b6c950b2
	  System UUID:                708b21e6-a97b-460d-a9d1-9590b6c950b2
	  Boot ID:                    9ca4eae6-c194-4fa8-93f1-e17fa899c9e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-mqzsd                          100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-old-k8s-version-316150                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-old-k8s-version-316150             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-316150    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9p78g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-316150             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-57f55c9bc5-vgxlc                   100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wmkk8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c5cw9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 9m15s                  kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node old-k8s-version-316150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                    kubelet          Node old-k8s-version-316150 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node old-k8s-version-316150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                    node-controller  Node old-k8s-version-316150 event: Registered Node old-k8s-version-316150 in Controller
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node old-k8s-version-316150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node old-k8s-version-316150 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m5s                   node-controller  Node old-k8s-version-316150 event: Registered Node old-k8s-version-316150 in Controller
	
	
	==> dmesg <==
	[Oct13 15:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000032] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000072] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct13 15:31] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.801773] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100026] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.948091] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.800827] kauditd_printk_skb: 110 callbacks suppressed
	[  +0.000680] kauditd_printk_skb: 134 callbacks suppressed
	[  +3.931951] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.524909] kauditd_printk_skb: 103 callbacks suppressed
	[ +13.199909] kauditd_printk_skb: 28 callbacks suppressed
	[Oct13 15:32] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.986412] kauditd_printk_skb: 5 callbacks suppressed
	[ +45.980982] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:34] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:37] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [366947a562bd92c63bbe45199f9a33011768869a7f63bd76d3e6d935ead76768] <==
	{"level":"info","ts":"2025-10-13T15:28:07.564436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 1"}
	{"level":"info","ts":"2025-10-13T15:28:07.564467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 2"}
	{"level":"info","ts":"2025-10-13T15:28:07.56456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 2"}
	{"level":"info","ts":"2025-10-13T15:28:07.564666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 2"}
	{"level":"info","ts":"2025-10-13T15:28:07.564827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 2"}
	{"level":"info","ts":"2025-10-13T15:28:07.573342Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:old-k8s-version-316150 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T15:28:07.5736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T15:28:07.577779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T15:28:07.580498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T15:28:07.586352Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T15:28:07.576155Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T15:28:07.605601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2025-10-13T15:28:07.57613Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T15:28:07.617123Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T15:28:07.625087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T15:28:07.625156Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2025-10-13T15:28:28.752428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.8882ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3156629568393946772 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-ncpdj\" mod_revision:386 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-ncpdj\" value_size:4689 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-ncpdj\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T15:28:28.752713Z","caller":"traceutil/trace.go:171","msg":"trace[897562314] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"215.104098ms","start":"2025-10-13T15:28:28.537585Z","end":"2025-10-13T15:28:28.752689Z","steps":["trace[897562314] 'process raft request'  (duration: 14.175826ms)","trace[897562314] 'compare'  (duration: 199.608005ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T15:28:28.753688Z","caller":"traceutil/trace.go:171","msg":"trace[1318530386] linearizableReadLoop","detail":"{readStateIndex:414; appliedIndex:413; }","duration":"136.697534ms","start":"2025-10-13T15:28:28.616982Z","end":"2025-10-13T15:28:28.753679Z","steps":["trace[1318530386] 'read index received'  (duration: 136.394397ms)","trace[1318530386] 'applied index is now lower than readState.Index'  (duration: 302.752µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T15:28:28.75416Z","caller":"traceutil/trace.go:171","msg":"trace[242076820] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"181.286244ms","start":"2025-10-13T15:28:28.572863Z","end":"2025-10-13T15:28:28.754149Z","steps":["trace[242076820] 'process raft request'  (duration: 180.584444ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:28:28.754333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.35785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:28:28.754362Z","caller":"traceutil/trace.go:171","msg":"trace[2092479342] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:400; }","duration":"137.395603ms","start":"2025-10-13T15:28:28.616958Z","end":"2025-10-13T15:28:28.754353Z","steps":["trace[2092479342] 'agreement among raft nodes before linearized reading'  (duration: 137.336885ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:28:36.183708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.341118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-5dd5756b68-mqzsd.186e168ef9eaebec\" ","response":"range_response_count:1 size:805"}
	{"level":"info","ts":"2025-10-13T15:28:36.183781Z","caller":"traceutil/trace.go:171","msg":"trace[964776480] range","detail":"{range_begin:/registry/events/kube-system/coredns-5dd5756b68-mqzsd.186e168ef9eaebec; range_end:; response_count:1; response_revision:433; }","duration":"139.43685ms","start":"2025-10-13T15:28:36.04433Z","end":"2025-10-13T15:28:36.183767Z","steps":["trace[964776480] 'range keys from in-memory index tree'  (duration: 139.125256ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:29:01.91277Z","caller":"traceutil/trace.go:171","msg":"trace[1466934641] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"132.336184ms","start":"2025-10-13T15:29:01.780413Z","end":"2025-10-13T15:29:01.912749Z","steps":["trace[1466934641] 'process raft request'  (duration: 128.962414ms)"],"step_count":1}
	
	
	==> etcd [3cacc4651c108ff6a220a86c0a82dacae18c0f0b3a7af341438f0018f9c59899] <==
	{"level":"info","ts":"2025-10-13T15:31:12.160635Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T15:31:12.161251Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T15:31:12.162724Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T15:31:12.16568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310)"}
	{"level":"info","ts":"2025-10-13T15:31:12.166008Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","added-peer-id":"7df1350fafd42bce","added-peer-peer-urls":["https://192.168.39.114:2380"]}
	{"level":"info","ts":"2025-10-13T15:31:12.168513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T15:31:12.168684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T15:31:12.171875Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7df1350fafd42bce","initial-advertise-peer-urls":["https://192.168.39.114:2380"],"listen-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T15:31:12.171994Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T15:31:12.176555Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2025-10-13T15:31:12.176604Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2025-10-13T15:31:13.712574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T15:31:13.712635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T15:31:13.71265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 2"}
	{"level":"info","ts":"2025-10-13T15:31:13.712661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T15:31:13.712667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 3"}
	{"level":"info","ts":"2025-10-13T15:31:13.712675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 3"}
	{"level":"info","ts":"2025-10-13T15:31:13.712682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 3"}
	{"level":"info","ts":"2025-10-13T15:31:13.714689Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:old-k8s-version-316150 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T15:31:13.714751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T15:31:13.714703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T15:31:13.716286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2025-10-13T15:31:13.716307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T15:31:13.71722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T15:31:13.717258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 15:40:33 up 9 min,  0 users,  load average: 0.16, 0.31, 0.23
	Linux old-k8s-version-316150 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4230bf40da49d1a0e0bbb3ee2ff2168022c001ebd5224f18d0c29f96f400ac6d] <==
	E1013 15:36:16.602254       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1013 15:36:16.603175       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1013 15:37:15.411945       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.181.153:443: connect: connection refused
	I1013 15:37:15.412056       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:37:16.602317       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:37:16.602461       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1013 15:37:16.602470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:37:16.603819       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:37:16.603910       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1013 15:37:16.603937       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1013 15:38:15.412133       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.181.153:443: connect: connection refused
	I1013 15:38:15.412180       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1013 15:39:15.411622       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.181.153:443: connect: connection refused
	I1013 15:39:15.411694       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:39:16.603529       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:39:16.603598       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1013 15:39:16.603604       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:39:16.604863       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:39:16.604982       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1013 15:39:16.604992       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1013 15:40:15.411949       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.99.181.153:443: connect: connection refused
	I1013 15:40:15.411979       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [b945924fda5ffb99ec3de93d1693e2c1e48d7cb52f45a57464729cd9f18e078f] <==
	I1013 15:28:11.786257       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 15:28:11.801164       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 15:28:13.389091       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 15:28:13.414993       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:28:13.439915       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1013 15:28:25.244621       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1013 15:28:25.352805       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	W1013 15:29:18.568126       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:29:18.568182       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1013 15:29:18.568620       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1013 15:29:18.568633       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:29:18.581415       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:29:18.582430       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1013 15:29:18.583182       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:29:18.583258       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1013 15:29:18.583463       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1013 15:29:18.828493       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.99.181.153"}
	W1013 15:29:18.851553       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:29:18.851647       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1013 15:29:18.885299       1 handler_proxy.go:93] no RequestInfo found in the context
	E1013 15:29:18.885585       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	
	
	==> kube-controller-manager [1a5f0247ca8314309dfe7fafb226ba66c84bc90efa6aa3557fec872a03f9095c] <==
	I1013 15:35:58.749101       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:36:28.281678       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:36:28.759436       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:36:58.288421       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:36:58.770599       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1013 15:37:08.557053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="304.804µs"
	I1013 15:37:11.055618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="250.81µs"
	I1013 15:37:18.958674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.862µs"
	I1013 15:37:21.557890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="91.672µs"
	E1013 15:37:28.294300       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:37:28.781749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1013 15:37:29.558489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="161.421µs"
	I1013 15:37:41.564587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="308.071µs"
	E1013 15:37:58.301228       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:37:58.791557       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:38:28.310060       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:38:28.800827       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:38:58.316566       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:38:58.815183       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:39:28.322523       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:39:28.829612       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:39:58.328284       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:39:58.840816       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1013 15:40:28.337752       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1013 15:40:28.851798       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [ea30b299d46708eb1d82b55f2bfd6201f9619f9c989f0104947a3d66622bec0e] <==
	I1013 15:28:25.616483       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ncpdj"
	I1013 15:28:25.637461       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mqzsd"
	I1013 15:28:25.735613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="482.604081ms"
	I1013 15:28:25.840122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.813861ms"
	I1013 15:28:25.842755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="315.226µs"
	I1013 15:28:25.884973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="982.908µs"
	I1013 15:28:27.879445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.78µs"
	I1013 15:28:27.956497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="145.703µs"
	I1013 15:28:28.514304       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1013 15:28:28.762725       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-ncpdj"
	I1013 15:28:28.790280       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="269.857533ms"
	I1013 15:28:28.814324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.300523ms"
	I1013 15:28:28.815062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.054µs"
	I1013 15:28:37.965913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.253µs"
	I1013 15:28:38.780606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.21µs"
	I1013 15:28:38.811521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.612µs"
	I1013 15:28:38.827703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="268.872µs"
	I1013 15:29:06.082852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.278811ms"
	I1013 15:29:06.083507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.046µs"
	I1013 15:29:18.614651       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1013 15:29:18.660694       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-vgxlc"
	I1013 15:29:18.694890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="80.853823ms"
	I1013 15:29:18.728539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="33.562443ms"
	I1013 15:29:18.728673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="63.378µs"
	I1013 15:29:18.755578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="268.195µs"
	
	
	==> kube-proxy [17ceee916069eb6e89605b43c66be8f6457e74dae55c9486213cf9ce8532277e] <==
	I1013 15:28:27.624208       1 server_others.go:69] "Using iptables proxy"
	I1013 15:28:28.071275       1 node.go:141] Successfully retrieved node IP: 192.168.39.114
	I1013 15:28:28.274594       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1013 15:28:28.274641       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:28:28.288435       1 server_others.go:152] "Using iptables Proxier"
	I1013 15:28:28.290174       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 15:28:28.290812       1 server.go:846] "Version info" version="v1.28.0"
	I1013 15:28:28.290832       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:28:28.302276       1 config.go:315] "Starting node config controller"
	I1013 15:28:28.302394       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 15:28:28.306645       1 config.go:188] "Starting service config controller"
	I1013 15:28:28.306719       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 15:28:28.306742       1 config.go:97] "Starting endpoint slice config controller"
	I1013 15:28:28.306746       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 15:28:28.403547       1 shared_informer.go:318] Caches are synced for node config
	I1013 15:28:28.407294       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1013 15:28:28.407347       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [f60341937ecf574c90b331ebde2cd8635c3ca4895fec5004420f4afab41b11cb] <==
	I1013 15:31:17.878980       1 server_others.go:69] "Using iptables proxy"
	I1013 15:31:17.920271       1 node.go:141] Successfully retrieved node IP: 192.168.39.114
	I1013 15:31:18.034958       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1013 15:31:18.035055       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:31:18.046065       1 server_others.go:152] "Using iptables Proxier"
	I1013 15:31:18.047280       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 15:31:18.049094       1 server.go:846] "Version info" version="v1.28.0"
	I1013 15:31:18.049185       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:18.056464       1 config.go:188] "Starting service config controller"
	I1013 15:31:18.056806       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 15:31:18.057052       1 config.go:315] "Starting node config controller"
	I1013 15:31:18.057509       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 15:31:18.063461       1 config.go:97] "Starting endpoint slice config controller"
	I1013 15:31:18.063870       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 15:31:18.157570       1 shared_informer.go:318] Caches are synced for service config
	I1013 15:31:18.158348       1 shared_informer.go:318] Caches are synced for node config
	I1013 15:31:18.165056       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a8d15b7bed39feabf926edb28e2401d541f6173524ae98f2bffa539c76d6bc11] <==
	W1013 15:28:10.665707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1013 15:28:10.665803       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1013 15:28:10.720421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1013 15:28:10.720467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1013 15:28:10.732305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1013 15:28:10.732404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1013 15:28:10.760226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1013 15:28:10.760253       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1013 15:28:10.797498       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1013 15:28:10.797566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1013 15:28:10.891073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1013 15:28:10.891165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1013 15:28:10.898267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1013 15:28:10.898308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1013 15:28:10.903665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1013 15:28:10.903704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1013 15:28:10.932775       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1013 15:28:10.932815       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:28:10.944428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1013 15:28:10.944475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1013 15:28:11.141259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1013 15:28:11.141304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1013 15:28:11.227375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1013 15:28:11.227414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1013 15:28:13.965228       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b68de31b083f18aafa0973bc4619439ea2952f8bad639af83154175757ea995c] <==
	I1013 15:31:12.758977       1 serving.go:348] Generated self-signed cert in-memory
	W1013 15:31:15.530240       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:31:15.530308       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:31:15.530325       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:31:15.530340       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:31:15.584442       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 15:31:15.584490       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:15.589126       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:31:15.589960       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 15:31:15.591231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 15:31:15.594427       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 15:31:15.691439       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 15:39:30 old-k8s-version-316150 kubelet[1043]: I1013 15:39:30.537762    1043 scope.go:117] "RemoveContainer" containerID="fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8"
	Oct 13 15:39:30 old-k8s-version-316150 kubelet[1043]: E1013 15:39:30.538176    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wmkk8_kubernetes-dashboard(d28fe1e1-a82d-4adf-8b66-313c70a3506b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wmkk8" podUID="d28fe1e1-a82d-4adf-8b66-313c70a3506b"
	Oct 13 15:39:33 old-k8s-version-316150 kubelet[1043]: E1013 15:39:33.538498    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9" podUID="3c77287c-8148-47b6-a144-a38a1c954408"
	Oct 13 15:39:34 old-k8s-version-316150 kubelet[1043]: E1013 15:39:34.538135    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgxlc" podUID="8dfeaf9a-c54b-4c32-b696-e785373a7ca6"
	Oct 13 15:39:43 old-k8s-version-316150 kubelet[1043]: I1013 15:39:43.536846    1043 scope.go:117] "RemoveContainer" containerID="fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8"
	Oct 13 15:39:43 old-k8s-version-316150 kubelet[1043]: E1013 15:39:43.537286    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wmkk8_kubernetes-dashboard(d28fe1e1-a82d-4adf-8b66-313c70a3506b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wmkk8" podUID="d28fe1e1-a82d-4adf-8b66-313c70a3506b"
	Oct 13 15:39:48 old-k8s-version-316150 kubelet[1043]: E1013 15:39:48.538354    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgxlc" podUID="8dfeaf9a-c54b-4c32-b696-e785373a7ca6"
	Oct 13 15:39:48 old-k8s-version-316150 kubelet[1043]: E1013 15:39:48.541276    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9" podUID="3c77287c-8148-47b6-a144-a38a1c954408"
	Oct 13 15:39:55 old-k8s-version-316150 kubelet[1043]: I1013 15:39:55.537473    1043 scope.go:117] "RemoveContainer" containerID="fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8"
	Oct 13 15:39:55 old-k8s-version-316150 kubelet[1043]: E1013 15:39:55.537823    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wmkk8_kubernetes-dashboard(d28fe1e1-a82d-4adf-8b66-313c70a3506b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wmkk8" podUID="d28fe1e1-a82d-4adf-8b66-313c70a3506b"
	Oct 13 15:39:59 old-k8s-version-316150 kubelet[1043]: E1013 15:39:59.537726    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9" podUID="3c77287c-8148-47b6-a144-a38a1c954408"
	Oct 13 15:40:02 old-k8s-version-316150 kubelet[1043]: E1013 15:40:02.538770    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgxlc" podUID="8dfeaf9a-c54b-4c32-b696-e785373a7ca6"
	Oct 13 15:40:09 old-k8s-version-316150 kubelet[1043]: I1013 15:40:09.536949    1043 scope.go:117] "RemoveContainer" containerID="fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8"
	Oct 13 15:40:09 old-k8s-version-316150 kubelet[1043]: E1013 15:40:09.537292    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wmkk8_kubernetes-dashboard(d28fe1e1-a82d-4adf-8b66-313c70a3506b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wmkk8" podUID="d28fe1e1-a82d-4adf-8b66-313c70a3506b"
	Oct 13 15:40:10 old-k8s-version-316150 kubelet[1043]: E1013 15:40:10.568474    1043 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 13 15:40:10 old-k8s-version-316150 kubelet[1043]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 13 15:40:10 old-k8s-version-316150 kubelet[1043]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 13 15:40:10 old-k8s-version-316150 kubelet[1043]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 13 15:40:10 old-k8s-version-316150 kubelet[1043]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 13 15:40:11 old-k8s-version-316150 kubelet[1043]: E1013 15:40:11.538559    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9" podUID="3c77287c-8148-47b6-a144-a38a1c954408"
	Oct 13 15:40:15 old-k8s-version-316150 kubelet[1043]: E1013 15:40:15.539310    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgxlc" podUID="8dfeaf9a-c54b-4c32-b696-e785373a7ca6"
	Oct 13 15:40:23 old-k8s-version-316150 kubelet[1043]: I1013 15:40:23.538011    1043 scope.go:117] "RemoveContainer" containerID="fabe76f9f304e6b9dfb5e79e564615b3ca448884f6cecc261e1ca9da5e54cac8"
	Oct 13 15:40:23 old-k8s-version-316150 kubelet[1043]: E1013 15:40:23.538705    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wmkk8_kubernetes-dashboard(d28fe1e1-a82d-4adf-8b66-313c70a3506b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wmkk8" podUID="d28fe1e1-a82d-4adf-8b66-313c70a3506b"
	Oct 13 15:40:25 old-k8s-version-316150 kubelet[1043]: E1013 15:40:25.539697    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5cw9" podUID="3c77287c-8148-47b6-a144-a38a1c954408"
	Oct 13 15:40:26 old-k8s-version-316150 kubelet[1043]: E1013 15:40:26.538598    1043 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vgxlc" podUID="8dfeaf9a-c54b-4c32-b696-e785373a7ca6"
	
	
	==> storage-provisioner [32d0e35d9b3f3e7646629a4399f401212e02df67ea496aa922fb4951ea5ae98d] <==
	I1013 15:32:00.700175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 15:32:00.718046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 15:32:00.718655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 15:32:18.126764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 15:32:18.127523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-316150_72e3aa80-05bb-44b0-8400-b2653474abdb!
	I1013 15:32:18.131242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"148bf1f6-c1d8-43f1-b91a-e61aee923bec", APIVersion:"v1", ResourceVersion:"775", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-316150_72e3aa80-05bb-44b0-8400-b2653474abdb became leader
	I1013 15:32:18.228265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-316150_72e3aa80-05bb-44b0-8400-b2653474abdb!
	
	
	==> storage-provisioner [36b9dbd691691d65d48230776c4c108fb03133f7fc304e45eed6240194df4a9f] <==
	I1013 15:31:17.719732       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:31:47.753771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316150 -n old-k8s-version-316150
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-316150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-vgxlc kubernetes-dashboard-8694d4445c-c5cw9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-316150 describe pod metrics-server-57f55c9bc5-vgxlc kubernetes-dashboard-8694d4445c-c5cw9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-316150 describe pod metrics-server-57f55c9bc5-vgxlc kubernetes-dashboard-8694d4445c-c5cw9: exit status 1 (81.356948ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vgxlc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-c5cw9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-316150 describe pod metrics-server-57f55c9bc5-vgxlc kubernetes-dashboard-8694d4445c-c5cw9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dqs5m" [3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 15:31:58.744629 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.536776 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.543207 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.554699 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.576213 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.617789 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.699391 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:01.861246 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:02.182997 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:02.824636 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:04.106435 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:06.668225 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:11.790642 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:17.918512 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:17.924973 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:17.936458 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:17.957931 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:17.999433 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:18.080958 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:18.242584 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:18.564495 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:19.206331 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:40:48.610007755 +0000 UTC m=+6339.550566128
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-673307 describe po kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-673307 describe po kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-dqs5m
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-673307/192.168.61.180
Start Time:       Mon, 13 Oct 2025 15:31:42 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lmpr6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-lmpr6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                    From               Message
----     ------            ----                   ----               -------
Warning  FailedScheduling  9m10s                  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         9m6s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m to no-preload-673307
Warning  Failed            7m27s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           5m59s (x5 over 9m5s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            5m59s (x4 over 9m5s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed            5m59s (x5 over 9m5s)   kubelet            Error: ErrImagePull
Warning  Failed            4m (x20 over 9m4s)     kubelet            Error: ImagePullBackOff
Normal   BackOff           3m48s (x21 over 9m4s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard: exit status 1 (83.979719ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-dqs5m" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673307 -n no-preload-673307
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-673307 logs -n 25: (1.742043457s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                   ARGS                                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-045564 sudo systemctl cat kubelet --no-pager                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status docker --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat docker --no-pager                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/docker/daemon.json                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo docker system info                                                                                                                                                                 │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat cri-docker --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                           │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                     │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cri-dockerd --version                                                                                                                                                              │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status containerd --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl cat containerd --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /lib/systemd/system/containerd.service                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/containerd/config.toml                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo containerd config dump                                                                                                                                                             │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status crio --all --full --no-pager                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat crio --no-pager                                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                            │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo crio config                                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p calico-045564                                                                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p disable-driver-mounts-917680                                                                                                                                                                          │ disable-driver-mounts-917680 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:40:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:40:30.985466 1879347 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:40:30.985793 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.985805 1879347 out.go:374] Setting ErrFile to fd 2...
	I1013 15:40:30.985809 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.986023 1879347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:40:30.986587 1879347 out.go:368] Setting JSON to false
	I1013 15:40:30.987896 1879347 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26579,"bootTime":1760343452,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:40:30.988008 1879347 start.go:141] virtualization: kvm guest
	I1013 15:40:30.990315 1879347 out.go:179] * [default-k8s-diff-port-426789] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:40:30.991995 1879347 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:40:30.992017 1879347 notify.go:220] Checking for updates...
	I1013 15:40:30.995009 1879347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:40:30.996863 1879347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:40:30.998430 1879347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:30.999970 1879347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:40:31.001304 1879347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:40:31.003293 1879347 config.go:182] Loaded profile config "embed-certs-516717": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003416 1879347 config.go:182] Loaded profile config "no-preload-673307": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003518 1879347 config.go:182] Loaded profile config "old-k8s-version-316150": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1013 15:40:31.003630 1879347 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:40:31.043746 1879347 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 15:40:31.045311 1879347 start.go:305] selected driver: kvm2
	I1013 15:40:31.045342 1879347 start.go:925] validating driver "kvm2" against <nil>
	I1013 15:40:31.045361 1879347 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:40:31.046187 1879347 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.046323 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.063606 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.063642 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.081742 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.081796 1879347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 15:40:31.082134 1879347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:40:31.082165 1879347 cni.go:84] Creating CNI manager for ""
	I1013 15:40:31.082248 1879347 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:40:31.082260 1879347 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 15:40:31.082309 1879347 start.go:349] cluster config:
	{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:40:31.082398 1879347 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.084383 1879347 out.go:179] * Starting "default-k8s-diff-port-426789" primary control-plane node in "default-k8s-diff-port-426789" cluster
	I1013 15:40:31.085994 1879347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:40:31.086060 1879347 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:40:31.086072 1879347 cache.go:58] Caching tarball of preloaded images
	I1013 15:40:31.086202 1879347 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:40:31.086218 1879347 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:40:31.086350 1879347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:40:31.086378 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json: {Name:mk3ce3e9d016d5e915bf4b40059397909c76db20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:40:31.086576 1879347 start.go:360] acquireMachinesLock for default-k8s-diff-port-426789: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:40:31.086627 1879347 start.go:364] duration metric: took 30.495µs to acquireMachinesLock for "default-k8s-diff-port-426789"
	I1013 15:40:31.086657 1879347 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:40:31.086772 1879347 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 15:40:31.088669 1879347 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1013 15:40:31.088891 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:40:31.088947 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:40:31.104190 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I1013 15:40:31.104771 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:40:31.105336 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:40:31.105364 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:40:31.105824 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:40:31.106142 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:40:31.106356 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:31.106567 1879347 start.go:159] libmachine.API.Create for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:40:31.106603 1879347 client.go:168] LocalClient.Create starting
	I1013 15:40:31.106653 1879347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 15:40:31.106700 1879347 main.go:141] libmachine: Decoding PEM data...
	I1013 15:40:31.106743 1879347 main.go:141] libmachine: Parsing certificate...
	I1013 15:40:31.106828 1879347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 15:40:31.106855 1879347 main.go:141] libmachine: Decoding PEM data...
	I1013 15:40:31.106876 1879347 main.go:141] libmachine: Parsing certificate...
	I1013 15:40:31.106902 1879347 main.go:141] libmachine: Running pre-create checks...
	I1013 15:40:31.106928 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .PreCreateCheck
	I1013 15:40:31.107355 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:40:31.107850 1879347 main.go:141] libmachine: Creating machine...
	I1013 15:40:31.107867 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Create
	I1013 15:40:31.108004 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) creating domain...
	I1013 15:40:31.108043 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) creating network...
	I1013 15:40:31.109684 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found existing default network
	I1013 15:40:31.109927 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network connections='3'>
	I1013 15:40:31.109954 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>default</name>
	I1013 15:40:31.109967 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 15:40:31.109979 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <forward mode='nat'>
	I1013 15:40:31.110001 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <nat>
	I1013 15:40:31.110012 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <port start='1024' end='65535'/>
	I1013 15:40:31.110022 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </nat>
	I1013 15:40:31.110034 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </forward>
	I1013 15:40:31.110046 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 15:40:31.110067 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 15:40:31.110078 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 15:40:31.110086 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.110101 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 15:40:31.110114 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.110123 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.110130 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.110142 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.110967 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.110790 1879376 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a1:e2:3d} reservation:<nil>}
	I1013 15:40:31.111781 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.111669 1879376 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002622d0}
	I1013 15:40:31.111842 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | defining private network:
	I1013 15:40:31.111863 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.111872 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network>
	I1013 15:40:31.111879 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>mk-default-k8s-diff-port-426789</name>
	I1013 15:40:31.111887 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <dns enable='no'/>
	I1013 15:40:31.111893 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:40:31.111901 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.111909 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:40:31.111916 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.111923 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.111930 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.111936 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.118484 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | creating private network mk-default-k8s-diff-port-426789 192.168.50.0/24...
	I1013 15:40:31.210527 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | private network mk-default-k8s-diff-port-426789 192.168.50.0/24 created
	I1013 15:40:31.210912 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network>
	I1013 15:40:31.210940 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>mk-default-k8s-diff-port-426789</name>
	I1013 15:40:31.210952 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 ...
	I1013 15:40:31.210975 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 15:40:31.210990 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 15:40:31.211113 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>1a44efd4-f378-4374-a77b-9a1907787496</uuid>
	I1013 15:40:31.211151 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1013 15:40:31.211165 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <mac address='52:54:00:a9:ba:3b'/>
	I1013 15:40:31.211180 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <dns enable='no'/>
	I1013 15:40:31.211190 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:40:31.211200 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.211215 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:40:31.211225 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.211234 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.211244 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.211299 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.211350 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.210865 1879376 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:31.576032 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.575840 1879376 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa...
	I1013 15:40:32.098435 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.098239 1879376 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk...
	I1013 15:40:32.098486 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Writing magic tar header
	I1013 15:40:32.098508 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Writing SSH key tar header
	I1013 15:40:32.098536 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.098436 1879376 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 ...
	I1013 15:40:32.098632 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789
	I1013 15:40:32.098657 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 15:40:32.098675 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 (perms=drwx------)
	I1013 15:40:32.098726 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 15:40:32.098740 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 15:40:32.098855 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 15:40:32.098882 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:32.098898 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 15:40:32.098913 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 15:40:32.098924 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 15:40:32.098933 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 15:40:32.098951 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) defining domain...
	I1013 15:40:32.099063 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins
	I1013 15:40:32.099079 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home
	I1013 15:40:32.099112 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | skipping /home - not owner
	I1013 15:40:32.100185 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) defining domain using XML: 
	I1013 15:40:32.100207 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) <domain type='kvm'>
	I1013 15:40:32.100219 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <name>default-k8s-diff-port-426789</name>
	I1013 15:40:32.100228 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <memory unit='MiB'>3072</memory>
	I1013 15:40:32.100240 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <vcpu>2</vcpu>
	I1013 15:40:32.100255 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <features>
	I1013 15:40:32.100265 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <acpi/>
	I1013 15:40:32.100274 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <apic/>
	I1013 15:40:32.100308 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <pae/>
	I1013 15:40:32.100393 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </features>
	I1013 15:40:32.100414 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <cpu mode='host-passthrough'>
	I1013 15:40:32.100425 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </cpu>
	I1013 15:40:32.100434 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <os>
	I1013 15:40:32.100441 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <type>hvm</type>
	I1013 15:40:32.100451 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <boot dev='cdrom'/>
	I1013 15:40:32.100457 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <boot dev='hd'/>
	I1013 15:40:32.100476 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <bootmenu enable='no'/>
	I1013 15:40:32.100501 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </os>
	I1013 15:40:32.100514 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <devices>
	I1013 15:40:32.100524 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <disk type='file' device='cdrom'>
	I1013 15:40:32.100538 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/boot2docker.iso'/>
	I1013 15:40:32.100550 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target dev='hdc' bus='scsi'/>
	I1013 15:40:32.100558 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <readonly/>
	I1013 15:40:32.100565 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </disk>
	I1013 15:40:32.100574 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <disk type='file' device='disk'>
	I1013 15:40:32.100584 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 15:40:32.100602 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk'/>
	I1013 15:40:32.100612 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target dev='hda' bus='virtio'/>
	I1013 15:40:32.100619 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </disk>
	I1013 15:40:32.100627 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <interface type='network'>
	I1013 15:40:32.100640 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source network='mk-default-k8s-diff-port-426789'/>
	I1013 15:40:32.100648 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <model type='virtio'/>
	I1013 15:40:32.100656 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </interface>
	I1013 15:40:32.100663 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <interface type='network'>
	I1013 15:40:32.100687 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source network='default'/>
	I1013 15:40:32.100694 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <model type='virtio'/>
	I1013 15:40:32.100702 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </interface>
	I1013 15:40:32.100709 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <serial type='pty'>
	I1013 15:40:32.100728 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target port='0'/>
	I1013 15:40:32.100735 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </serial>
	I1013 15:40:32.100743 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <console type='pty'>
	I1013 15:40:32.100751 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target type='serial' port='0'/>
	I1013 15:40:32.100758 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </console>
	I1013 15:40:32.100765 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <rng model='virtio'>
	I1013 15:40:32.100773 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <backend model='random'>/dev/random</backend>
	I1013 15:40:32.100780 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </rng>
	I1013 15:40:32.100787 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </devices>
	I1013 15:40:32.100794 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) </domain>
	I1013 15:40:32.100804 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) 
	I1013 15:40:32.106463 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:68:6a:54 in network default
	I1013 15:40:32.107329 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) starting domain...
	I1013 15:40:32.107346 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) ensuring networks are active...
	I1013 15:40:32.107375 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.108459 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Ensuring network default is active
	I1013 15:40:32.109195 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Ensuring network mk-default-k8s-diff-port-426789 is active
	I1013 15:40:32.110092 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) getting domain XML...
	I1013 15:40:32.111257 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | starting domain XML:
	I1013 15:40:32.111288 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <domain type='kvm'>
	I1013 15:40:32.111302 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>default-k8s-diff-port-426789</name>
	I1013 15:40:32.111318 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>4204e92c-5377-432a-9bb1-63d826e31270</uuid>
	I1013 15:40:32.111331 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:40:32.111341 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:40:32.111356 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:40:32.111367 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <os>
	I1013 15:40:32.111378 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:40:32.111390 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <boot dev='cdrom'/>
	I1013 15:40:32.111424 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <boot dev='hd'/>
	I1013 15:40:32.111453 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <bootmenu enable='no'/>
	I1013 15:40:32.111464 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </os>
	I1013 15:40:32.111478 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <features>
	I1013 15:40:32.111490 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <acpi/>
	I1013 15:40:32.111516 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <apic/>
	I1013 15:40:32.111533 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <pae/>
	I1013 15:40:32.111541 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </features>
	I1013 15:40:32.111553 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:40:32.111567 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <clock offset='utc'/>
	I1013 15:40:32.111603 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:40:32.111709 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:40:32.111744 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_crash>destroy</on_crash>
	I1013 15:40:32.111761 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <devices>
	I1013 15:40:32.111778 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:40:32.111791 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <disk type='file' device='cdrom'>
	I1013 15:40:32.111803 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:40:32.111821 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/boot2docker.iso'/>
	I1013 15:40:32.111852 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:40:32.111878 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <readonly/>
	I1013 15:40:32.111895 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:40:32.111911 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </disk>
	I1013 15:40:32.111927 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <disk type='file' device='disk'>
	I1013 15:40:32.111946 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:40:32.111977 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk'/>
	I1013 15:40:32.111990 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:40:32.112014 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:40:32.112028 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </disk>
	I1013 15:40:32.112039 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:40:32.112048 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:40:32.112057 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </controller>
	I1013 15:40:32.112065 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:40:32.112075 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:40:32.112088 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:40:32.112096 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </controller>
	I1013 15:40:32.112103 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <interface type='network'>
	I1013 15:40:32.112112 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <mac address='52:54:00:07:df:00'/>
	I1013 15:40:32.112119 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source network='mk-default-k8s-diff-port-426789'/>
	I1013 15:40:32.112126 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <model type='virtio'/>
	I1013 15:40:32.112135 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:40:32.112144 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </interface>
	I1013 15:40:32.112155 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <interface type='network'>
	I1013 15:40:32.112166 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <mac address='52:54:00:68:6a:54'/>
	I1013 15:40:32.112181 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source network='default'/>
	I1013 15:40:32.112192 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <model type='virtio'/>
	I1013 15:40:32.112205 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:40:32.112218 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </interface>
	I1013 15:40:32.112236 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <serial type='pty'>
	I1013 15:40:32.112249 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target type='isa-serial' port='0'>
	I1013 15:40:32.112265 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |         <model name='isa-serial'/>
	I1013 15:40:32.112277 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       </target>
	I1013 15:40:32.112286 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </serial>
	I1013 15:40:32.112300 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <console type='pty'>
	I1013 15:40:32.112312 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target type='serial' port='0'/>
	I1013 15:40:32.112322 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </console>
	I1013 15:40:32.112333 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:40:32.112344 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:40:32.112352 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <audio id='1' type='none'/>
	I1013 15:40:32.112366 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <memballoon model='virtio'>
	I1013 15:40:32.112383 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:40:32.112392 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </memballoon>
	I1013 15:40:32.112397 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <rng model='virtio'>
	I1013 15:40:32.112403 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:40:32.112415 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:40:32.112424 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </rng>
	I1013 15:40:32.112430 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </devices>
	I1013 15:40:32.112440 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </domain>
	I1013 15:40:32.112452 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:32.598676 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) waiting for domain to start...
	I1013 15:40:32.600623 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) domain is now running
	I1013 15:40:32.600652 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) waiting for IP...
	I1013 15:40:32.601752 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.602620 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:32.602651 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:32.603048 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:32.603140 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.603054 1879376 retry.go:31] will retry after 222.839819ms: waiting for domain to come up
	I1013 15:40:32.828034 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.828792 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:32.828821 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:32.829227 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:32.829250 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.829172 1879376 retry.go:31] will retry after 277.559406ms: waiting for domain to come up
	I1013 15:40:33.109037 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.109969 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.110006 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.110329 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.110359 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.110317 1879376 retry.go:31] will retry after 316.092535ms: waiting for domain to come up
	I1013 15:40:33.427954 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.428854 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.428884 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.429315 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.429344 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.429267 1879376 retry.go:31] will retry after 552.952396ms: waiting for domain to come up
	I1013 15:40:33.984083 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.984851 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.984879 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.985341 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.985417 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.985318 1879376 retry.go:31] will retry after 571.351202ms: waiting for domain to come up
	I1013 15:40:34.558025 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:34.558541 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:34.558568 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:34.558941 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:34.558970 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:34.558894 1879376 retry.go:31] will retry after 665.719599ms: waiting for domain to come up
	I1013 15:40:35.226260 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:35.226976 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:35.227011 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:35.227350 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:35.227378 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:35.227311 1879376 retry.go:31] will retry after 1.182674007s: waiting for domain to come up
	I1013 15:40:36.411792 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:36.412663 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:36.412689 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:36.413013 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:36.413071 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:36.412987 1879376 retry.go:31] will retry after 1.372038869s: waiting for domain to come up
	I1013 15:40:37.787107 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:37.787665 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:37.787687 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:37.788100 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:37.788129 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:37.788041 1879376 retry.go:31] will retry after 1.596227615s: waiting for domain to come up
	I1013 15:40:39.385884 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:39.386796 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:39.386844 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:39.387137 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:39.387170 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:39.387125 1879376 retry.go:31] will retry after 1.590524128s: waiting for domain to come up
	I1013 15:40:40.980098 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:40.981033 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:40.981095 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:40.981747 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:40.981786 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:40.981730 1879376 retry.go:31] will retry after 2.368318019s: waiting for domain to come up
	I1013 15:40:43.353084 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:43.353818 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:43.353851 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:43.354271 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:43.354315 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:43.354218 1879376 retry.go:31] will retry after 3.452503205s: waiting for domain to come up
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e19bdab9211ab       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   ef84cd84287c6       dashboard-metrics-scraper-6ffb444bf9-fbbs2
	68b3fdbaad74b       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   8fcf5a5038548       storage-provisioner
	fff2931577732       56cc512116c8f       9 minutes ago       Running             busybox                     1                   79c5a3c348fd5       busybox
	9be646d88f3f4       52546a367cc9e       9 minutes ago       Running             coredns                     1                   59f8d1b6eff12       coredns-66bc5c9577-vfqml
	c8d68c0b5b004       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   8fcf5a5038548       storage-provisioner
	8a31e63284253       fc25172553d79       9 minutes ago       Running             kube-proxy                  1                   fd61ae777eb69       kube-proxy-v8ndx
	5e5dd356ff2ec       c3994bc696102       9 minutes ago       Running             kube-apiserver              1                   dee68c3ae6b6a       kube-apiserver-no-preload-673307
	c10ad89ae3abb       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   1c280b78a8f1b       etcd-no-preload-673307
	84ce421cd0a89       7dd6aaa1717ab       9 minutes ago       Running             kube-scheduler              1                   1c083b14c19d7       kube-scheduler-no-preload-673307
	2709cff04f5c8       c80c8dbafe7dd       9 minutes ago       Running             kube-controller-manager     1                   f202c9072274e       kube-controller-manager-no-preload-673307
	9484313d54631       56cc512116c8f       11 minutes ago      Exited              busybox                     0                   d6fcae4e1d4d5       busybox
	dca47b48c82d3       52546a367cc9e       11 minutes ago      Exited              coredns                     0                   7e0f99084df2e       coredns-66bc5c9577-vfqml
	22670bd9ab094       fc25172553d79       11 minutes ago      Exited              kube-proxy                  0                   a961c0d8c2594       kube-proxy-v8ndx
	c049868803b14       5f1f5298c888d       12 minutes ago      Exited              etcd                        0                   323ab2d53b64a       etcd-no-preload-673307
	97b7ebc7f552a       c3994bc696102       12 minutes ago      Exited              kube-apiserver              0                   092bd0f706ede       kube-apiserver-no-preload-673307
	668e85e990be5       7dd6aaa1717ab       12 minutes ago      Exited              kube-scheduler              0                   c1ff9a19d8382       kube-scheduler-no-preload-673307
	b87b6ea0d2c9d       c80c8dbafe7dd       12 minutes ago      Exited              kube-controller-manager     0                   4a5ffd2a57b04       kube-controller-manager-no-preload-673307
	
	
	==> containerd <==
	Oct 13 15:34:54 no-preload-673307 containerd[721]: time="2025-10-13T15:34:54.461397001Z" level=info msg="StartContainer for \"b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d\""
	Oct 13 15:34:54 no-preload-673307 containerd[721]: time="2025-10-13T15:34:54.543122708Z" level=info msg="StartContainer for \"b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d\" returns successfully"
	Oct 13 15:34:54 no-preload-673307 containerd[721]: time="2025-10-13T15:34:54.602634478Z" level=info msg="shim disconnected" id=b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d namespace=k8s.io
	Oct 13 15:34:54 no-preload-673307 containerd[721]: time="2025-10-13T15:34:54.603196618Z" level=warning msg="cleaning up after shim disconnected" id=b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d namespace=k8s.io
	Oct 13 15:34:54 no-preload-673307 containerd[721]: time="2025-10-13T15:34:54.603307731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:34:55 no-preload-673307 containerd[721]: time="2025-10-13T15:34:55.612842977Z" level=info msg="RemoveContainer for \"a5d929911b3ed79fb6eb8f2541bf9a1617851911fe2e2fecfcf8156f551d611a\""
	Oct 13 15:34:55 no-preload-673307 containerd[721]: time="2025-10-13T15:34:55.623486711Z" level=info msg="RemoveContainer for \"a5d929911b3ed79fb6eb8f2541bf9a1617851911fe2e2fecfcf8156f551d611a\" returns successfully"
	Oct 13 15:37:08 no-preload-673307 containerd[721]: time="2025-10-13T15:37:08.427971502Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:37:08 no-preload-673307 containerd[721]: time="2025-10-13T15:37:08.432967787Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:37:08 no-preload-673307 containerd[721]: time="2025-10-13T15:37:08.435274222Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:37:08 no-preload-673307 containerd[721]: time="2025-10-13T15:37:08.435380600Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 13 15:37:41 no-preload-673307 containerd[721]: time="2025-10-13T15:37:41.428816120Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:37:41 no-preload-673307 containerd[721]: time="2025-10-13T15:37:41.432194633Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:41 no-preload-673307 containerd[721]: time="2025-10-13T15:37:41.499053230Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:41 no-preload-673307 containerd[721]: time="2025-10-13T15:37:41.616009541Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:37:41 no-preload-673307 containerd[721]: time="2025-10-13T15:37:41.616092177Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.432094052Z" level=info msg="CreateContainer within sandbox \"ef84cd84287c6ac9da0e101f12c902d29bd6a2a1bb40086aef634b428a605eb8\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.463769340Z" level=info msg="CreateContainer within sandbox \"ef84cd84287c6ac9da0e101f12c902d29bd6a2a1bb40086aef634b428a605eb8\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64\""
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.464543835Z" level=info msg="StartContainer for \"e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64\""
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.535673344Z" level=info msg="StartContainer for \"e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64\" returns successfully"
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.596085428Z" level=info msg="shim disconnected" id=e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64 namespace=k8s.io
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.596136275Z" level=warning msg="cleaning up after shim disconnected" id=e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64 namespace=k8s.io
	Oct 13 15:37:45 no-preload-673307 containerd[721]: time="2025-10-13T15:37:45.596151261Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:37:46 no-preload-673307 containerd[721]: time="2025-10-13T15:37:46.197827371Z" level=info msg="RemoveContainer for \"b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d\""
	Oct 13 15:37:46 no-preload-673307 containerd[721]: time="2025-10-13T15:37:46.206161597Z" level=info msg="RemoveContainer for \"b2fd6ce00f034362a972a3c3a70a32bff6fad4b0a50e4c461387001897cb4a6d\" returns successfully"
	
	
	==> coredns [9be646d88f3f4c3d43677b13d759061982b278cec62582a93069acfab88a81cf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50220 - 57105 "HINFO IN 2337932929109627341.7539543411957341480. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.444279123s
	
	
	==> coredns [dca47b48c82d372e8c111ef9f1b2fd5b34da6d82251035a0e5be07fa64b08493] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               no-preload-673307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-673307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=no-preload-673307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_28_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:28:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-673307
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:40:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:40:33 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:40:33 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:40:33 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:40:33 +0000   Mon, 13 Oct 2025 15:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.180
	  Hostname:    no-preload-673307
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1cfd5774df7841e686f57e78cc4438e8
	  System UUID:                1cfd5774-df78-41e6-86f5-7e78cc4438e8
	  Boot ID:                    d3f39062-cfb5-49bd-a190-ed26112d5333
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-vfqml                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     11m
	  kube-system                 etcd-no-preload-673307                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         11m
	  kube-system                 kube-apiserver-no-preload-673307              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-no-preload-673307     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-v8ndx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-no-preload-673307              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-fx4gj               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fbbs2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dqs5m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 9m14s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeReady                11m                    kubelet          Node no-preload-673307 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node no-preload-673307 event: Registered Node no-preload-673307 in Controller
	  Normal   Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m18s                  kubelet          Node no-preload-673307 has been rebooted, boot id: d3f39062-cfb5-49bd-a190-ed26112d5333
	  Normal   RegisteredNode           9m14s                  node-controller  Node no-preload-673307 event: Registered Node no-preload-673307 in Controller
	
	
	==> dmesg <==
	[Oct13 15:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009150] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.980005] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.113922] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.618932] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.968954] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.461254] kauditd_printk_skb: 176 callbacks suppressed
	[  +2.929381] kauditd_printk_skb: 41 callbacks suppressed
	[Oct13 15:32] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.981032] kauditd_printk_skb: 7 callbacks suppressed
	[ +23.015573] kauditd_printk_skb: 5 callbacks suppressed
	[Oct13 15:33] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:34] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:37] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [c049868803b144f173af37d69998803a723d7e4f596a759002565b5c8858fe03] <==
	{"level":"info","ts":"2025-10-13T15:29:01.834748Z","caller":"traceutil/trace.go:172","msg":"trace[79171425] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"559.004876ms","start":"2025-10-13T15:29:01.275723Z","end":"2025-10-13T15:29:01.834728Z","steps":["trace[79171425] 'process raft request'  (duration: 558.529057ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:01.835761Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.276333Z","time spent":"558.357537ms","remote":"127.0.0.1:58982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:272 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:782 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-10-13T15:29:01.834504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"454.642097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-13T15:29:01.836550Z","caller":"traceutil/trace.go:172","msg":"trace[1279998901] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:380; }","duration":"456.775528ms","start":"2025-10-13T15:29:01.379760Z","end":"2025-10-13T15:29:01.836535Z","steps":["trace[1279998901] 'agreement among raft nodes before linearized reading'  (duration: 454.181313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:01.836888Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.379741Z","time spent":"457.116792ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":374,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T15:29:01.838465Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.275700Z","time spent":"559.33455ms","remote":"127.0.0.1:59150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" mod_revision:307 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" value_size:4887 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" > >"}
	{"level":"info","ts":"2025-10-13T15:29:01.934276Z","caller":"traceutil/trace.go:172","msg":"trace[1419914040] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"226.136565ms","start":"2025-10-13T15:29:01.708116Z","end":"2025-10-13T15:29:01.934252Z","steps":["trace[1419914040] 'process raft request'  (duration: 207.051274ms)","trace[1419914040] 'compare'  (duration: 16.946979ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.586185Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5960683286396370121,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-13T15:29:03.772615Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"505.564194ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:29:03.772688Z","caller":"traceutil/trace.go:172","msg":"trace[510750428] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:410; }","duration":"505.711186ms","start":"2025-10-13T15:29:03.266966Z","end":"2025-10-13T15:29:03.772677Z","steps":["trace[510750428] 'range keys from in-memory index tree'  (duration: 505.497011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.773237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"753.49044ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960683286396370123 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" value_size:653 lease:5960683286396369334 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T15:29:03.773282Z","caller":"traceutil/trace.go:172","msg":"trace[1776426828] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"837.982533ms","start":"2025-10-13T15:29:02.935292Z","end":"2025-10-13T15:29:03.773274Z","steps":["trace[1776426828] 'process raft request'  (duration: 84.256489ms)","trace[1776426828] 'compare'  (duration: 753.356269ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.773308Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:02.935272Z","time spent":"838.026607ms","remote":"127.0.0.1:58922","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":741,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" value_size:653 lease:5960683286396369334 >> failure:<>"}
	{"level":"info","ts":"2025-10-13T15:29:03.774314Z","caller":"traceutil/trace.go:172","msg":"trace[677175873] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"831.26664ms","start":"2025-10-13T15:29:02.943037Z","end":"2025-10-13T15:29:03.774303Z","steps":["trace[677175873] 'process raft request'  (duration: 831.202332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.774440Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:02.942955Z","time spent":"831.388369ms","remote":"127.0.0.1:53978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:350 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2025-10-13T15:29:03.774570Z","caller":"traceutil/trace.go:172","msg":"trace[455507024] linearizableReadLoop","detail":"{readStateIndex:427; appliedIndex:428; }","duration":"688.690046ms","start":"2025-10-13T15:29:03.085873Z","end":"2025-10-13T15:29:03.774563Z","steps":["trace[455507024] 'read index received'  (duration: 688.687953ms)","trace[455507024] 'applied index is now lower than readState.Index'  (duration: 1.716µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.774702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"688.826508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:29:03.774768Z","caller":"traceutil/trace.go:172","msg":"trace[279751344] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:412; }","duration":"688.844964ms","start":"2025-10-13T15:29:03.085868Z","end":"2025-10-13T15:29:03.774713Z","steps":["trace[279751344] 'agreement among raft nodes before linearized reading'  (duration: 688.809579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.774807Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.085843Z","time spent":"688.955768ms","remote":"127.0.0.1:59150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-13T15:29:03.887111Z","caller":"traceutil/trace.go:172","msg":"trace[940418926] linearizableReadLoop","detail":"{readStateIndex:428; appliedIndex:428; }","duration":"112.470041ms","start":"2025-10-13T15:29:03.774584Z","end":"2025-10-13T15:29:03.887054Z","steps":["trace[940418926] 'read index received'  (duration: 112.461017ms)","trace[940418926] 'applied index is now lower than readState.Index'  (duration: 6.962µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.898867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"537.149044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.180\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-10-13T15:29:03.899309Z","caller":"traceutil/trace.go:172","msg":"trace[2023319934] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"115.355704ms","start":"2025-10-13T15:29:03.783938Z","end":"2025-10-13T15:29:03.899294Z","steps":["trace[2023319934] 'process raft request'  (duration: 115.219782ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:29:03.899620Z","caller":"traceutil/trace.go:172","msg":"trace[718732609] range","detail":"{range_begin:/registry/masterleases/192.168.61.180; range_end:; response_count:1; response_revision:412; }","duration":"537.917585ms","start":"2025-10-13T15:29:03.361684Z","end":"2025-10-13T15:29:03.899602Z","steps":["trace[718732609] 'agreement among raft nodes before linearized reading'  (duration: 525.513518ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.900561Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.361666Z","time spent":"538.870774ms","remote":"127.0.0.1:58820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.61.180\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T15:29:03.899353Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.308378Z","time spent":"590.970465ms","remote":"127.0.0.1:54082","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [c10ad89ae3abbf41e4927c24532eb50ca09ac34ecd038f6df274bdadd88c8715] <==
	{"level":"warn","ts":"2025-10-13T15:31:31.452101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.477568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.505536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.524159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.566499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.590054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.638859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.657588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.692078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.715561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:31.869045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:31:36.173002Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.082731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.173149Z","caller":"traceutil/trace.go:172","msg":"trace[1214062527] range","detail":"{range_begin:/registry/services/endpoints; range_end:; response_count:0; response_revision:522; }","duration":"100.304017ms","start":"2025-10-13T15:31:36.072817Z","end":"2025-10-13T15:31:36.173121Z","steps":["trace[1214062527] 'agreement among raft nodes before linearized reading'  (duration: 99.846959ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.174251Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.440414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.174324Z","caller":"traceutil/trace.go:172","msg":"trace[797138086] range","detail":"{range_begin:/registry/volumeattachments; range_end:; response_count:0; response_revision:522; }","duration":"101.535038ms","start":"2025-10-13T15:31:36.072777Z","end":"2025-10-13T15:31:36.174312Z","steps":["trace[797138086] 'agreement among raft nodes before linearized reading'  (duration: 101.404683ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.217893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.922512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.218559Z","caller":"traceutil/trace.go:172","msg":"trace[1386832768] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:522; }","duration":"145.788837ms","start":"2025-10-13T15:31:36.072747Z","end":"2025-10-13T15:31:36.218536Z","steps":["trace[1386832768] 'agreement among raft nodes before linearized reading'  (duration: 141.869992ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.222067Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.24118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.222152Z","caller":"traceutil/trace.go:172","msg":"trace[1480413118] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:522; }","duration":"149.43538ms","start":"2025-10-13T15:31:36.072700Z","end":"2025-10-13T15:31:36.222135Z","steps":["trace[1480413118] 'agreement among raft nodes before linearized reading'  (duration: 149.097701ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.222981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.302884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.223049Z","caller":"traceutil/trace.go:172","msg":"trace[886982028] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:522; }","duration":"150.378922ms","start":"2025-10-13T15:31:36.072658Z","end":"2025-10-13T15:31:36.223037Z","steps":["trace[886982028] 'agreement among raft nodes before linearized reading'  (duration: 150.188657ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.232125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.464911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.232249Z","caller":"traceutil/trace.go:172","msg":"trace[172312641] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:522; }","duration":"159.60561ms","start":"2025-10-13T15:31:36.072628Z","end":"2025-10-13T15:31:36.232233Z","steps":["trace[172312641] 'agreement among raft nodes before linearized reading'  (duration: 159.396803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.233106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.487344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.233195Z","caller":"traceutil/trace.go:172","msg":"trace[639761388] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:522; }","duration":"160.587763ms","start":"2025-10-13T15:31:36.072595Z","end":"2025-10-13T15:31:36.233182Z","steps":["trace[639761388] 'agreement among raft nodes before linearized reading'  (duration: 160.443444ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:40:50 up 9 min,  0 users,  load average: 0.18, 0.20, 0.11
	Linux no-preload-673307 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e5dd356ff2ecd3b2a79371993d9db06b0e5f407812a48ae0510dcfaee7b770c] <==
	E1013 15:36:33.763156       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:36:33.763177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1013 15:36:33.763205       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:36:33.764603       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:37:33.763491       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:37:33.763569       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:37:33.763586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:37:33.765063       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:37:33.765299       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:37:33.765502       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:39:33.764699       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:39:33.764790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:39:33.764811       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:39:33.765972       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:39:33.766066       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:39:33.766136       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [97b7ebc7f552a892033fd37731d8cf1db86ef835db80eaa5072d77e823d5ab0f] <==
	I1013 15:28:54.691729       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:28:54.785957       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 15:28:59.799591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:28:59.807615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:00.187888       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 15:29:00.346173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 15:29:24.416292       1 conn.go:339] Error on socket receive: read tcp 192.168.61.180:8443->192.168.61.1:55516: use of closed network connection
	I1013 15:29:25.311417       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:29:25.325268       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.325388       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 15:29:25.325441       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:29:25.525877       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.75.252"}
	W1013 15:29:25.536965       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.537140       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1013 15:29:25.557586       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.557648       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [2709cff04f5c8f3d8e031b98918282f280278e2f018fa8f081540af0ea415234] <==
	I1013 15:34:36.598704       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:35:06.536011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:35:06.612875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:35:36.544262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:35:36.623548       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:36:06.551071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:36:06.634103       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:36:36.560918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:36:36.644282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:37:06.567941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:37:06.656828       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:37:36.573284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:37:36.667617       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:38:06.580323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:38:06.677557       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:38:36.587926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:38:36.689040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:39:06.594316       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:39:06.707524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:39:36.600533       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:39:36.718103       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:40:06.607196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:40:06.731410       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:40:36.615991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:40:36.743985       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [b87b6ea0d2c9dabb66c3ff7cdf95b3b641c6ba1e5e14525c946773448e23f04e] <==
	I1013 15:28:59.297425       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 15:28:59.297768       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 15:28:59.298244       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 15:28:59.298343       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 15:28:59.297066       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 15:28:59.301372       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 15:28:59.301709       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 15:28:59.302565       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 15:28:59.302912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 15:28:59.307099       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 15:28:59.311160       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 15:28:59.322145       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:28:59.322270       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 15:28:59.333689       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 15:28:59.342624       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 15:28:59.343351       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-673307" podCIDRs=["10.244.0.0/24"]
	I1013 15:28:59.343444       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 15:28:59.344091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 15:28:59.346516       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 15:28:59.352566       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:28:59.352617       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 15:28:59.352628       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 15:28:59.353806       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 15:28:59.356163       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 15:28:59.358439       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [22670bd9ab09463fa3b05acf2e24db7346873b154520857894403e5e1ac9a3a4] <==
	I1013 15:29:03.121239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:29:03.221763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:29:03.221805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.180"]
	E1013 15:29:03.222279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:29:03.275273       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:29:03.275429       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:29:03.275747       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:29:03.289415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:29:03.290209       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:29:03.290543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:29:03.298295       1 config.go:200] "Starting service config controller"
	I1013 15:29:03.298542       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:29:03.298793       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:29:03.298903       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:29:03.299121       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:29:03.299187       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:29:03.304907       1 config.go:309] "Starting node config controller"
	I1013 15:29:03.305047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:29:03.305066       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:29:03.399273       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 15:29:03.399292       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:29:03.399363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8a31e632842532c09058356081dc694b6fb32c7e6b806531b0c23108c2db8d89] <==
	I1013 15:31:34.822244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:31:34.923372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:31:34.923780       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.180"]
	E1013 15:31:34.924664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:31:35.145588       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:31:35.145667       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:31:35.145778       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:31:35.225337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:31:35.253256       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:31:35.253508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:35.273654       1 config.go:200] "Starting service config controller"
	I1013 15:31:35.275004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:31:35.284661       1 config.go:309] "Starting node config controller"
	I1013 15:31:35.284707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:31:35.279399       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:31:35.326102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:31:35.279503       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:31:35.326391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:31:35.384774       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:31:35.384825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:31:35.427229       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:31:35.427288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [668e85e990be5774163a840d95ba68ca46711c333066747ce7afa9a54793856a] <==
	E1013 15:28:51.339212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:28:51.339289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:28:51.339350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 15:28:51.339443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:28:51.339503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:28:52.162695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 15:28:52.184695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:28:52.185257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:28:52.254582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 15:28:52.259823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 15:28:52.284962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 15:28:52.311912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:28:52.351124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 15:28:52.367698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:28:52.436750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:28:52.462783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:28:52.562081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 15:28:52.613496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:28:52.689211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 15:28:52.694028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:28:52.766331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:28:52.778954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:28:52.851270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:28:52.908402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1013 15:28:55.011073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [84ce421cd0a89e02fbf410505ac618d70b2aa42fb66f07c012f81c834b95733e] <==
	I1013 15:31:30.084311       1 serving.go:386] Generated self-signed cert in-memory
	W1013 15:31:32.649506       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:31:32.649839       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:31:32.649876       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:31:32.649883       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:31:32.741472       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 15:31:32.741517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:32.751734       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:31:32.751987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:31:32.755514       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 15:31:32.755968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 15:31:32.853316       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 15:39:31 no-preload-673307 kubelet[1041]: E1013 15:39:31.427359    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:39:35 no-preload-673307 kubelet[1041]: I1013 15:39:35.426606    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:39:35 no-preload-673307 kubelet[1041]: E1013 15:39:35.426781    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:39:41 no-preload-673307 kubelet[1041]: E1013 15:39:41.428058    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:39:42 no-preload-673307 kubelet[1041]: E1013 15:39:42.428898    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:39:50 no-preload-673307 kubelet[1041]: I1013 15:39:50.426907    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:39:50 no-preload-673307 kubelet[1041]: E1013 15:39:50.427142    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:39:53 no-preload-673307 kubelet[1041]: E1013 15:39:53.431948    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:39:54 no-preload-673307 kubelet[1041]: E1013 15:39:54.428555    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:40:02 no-preload-673307 kubelet[1041]: I1013 15:40:02.426738    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:40:02 no-preload-673307 kubelet[1041]: E1013 15:40:02.427057    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:40:07 no-preload-673307 kubelet[1041]: E1013 15:40:07.428741    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:40:07 no-preload-673307 kubelet[1041]: E1013 15:40:07.429057    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:40:17 no-preload-673307 kubelet[1041]: I1013 15:40:17.426658    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:40:17 no-preload-673307 kubelet[1041]: E1013 15:40:17.426871    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:40:18 no-preload-673307 kubelet[1041]: E1013 15:40:18.428713    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:40:22 no-preload-673307 kubelet[1041]: E1013 15:40:22.429358    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:40:29 no-preload-673307 kubelet[1041]: I1013 15:40:29.427355    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:40:29 no-preload-673307 kubelet[1041]: E1013 15:40:29.427970    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:40:29 no-preload-673307 kubelet[1041]: E1013 15:40:29.430737    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:40:36 no-preload-673307 kubelet[1041]: E1013 15:40:36.428683    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:40:42 no-preload-673307 kubelet[1041]: I1013 15:40:42.426899    1041 scope.go:117] "RemoveContainer" containerID="e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64"
	Oct 13 15:40:42 no-preload-673307 kubelet[1041]: E1013 15:40:42.427350    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:40:43 no-preload-673307 kubelet[1041]: E1013 15:40:43.433075    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:40:50 no-preload-673307 kubelet[1041]: E1013 15:40:50.428320    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	
	
	==> storage-provisioner [68b3fdbaad74bfc96f73bc11bd3d91ea38819384d2ba896d82b799b59960cf1d] <==
	W1013 15:40:25.172361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:27.177615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:27.184991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:29.190862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:29.203648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:31.207737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:31.213832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:33.219334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:33.226397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:35.232638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:35.242892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:37.249069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:37.259169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:39.263612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:39.276241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:41.280559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:41.286624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:43.292162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:43.304971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:45.308854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:45.316249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:47.319993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:47.329791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:49.334692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:49.342958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8d68c0b5b0042ec3af32daf76852a75c8bbac2763603d8dce81657460ae9288] <==
	I1013 15:31:34.417706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:32:04.427642       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-673307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m: exit status 1 (80.530566ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-fx4gj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dqs5m" not found

** /stderr **
helpers_test.go:287: kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.96s)
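The kubelet entries above cycle through the same two failure modes: a DNS failure for the deliberately unresolvable `fake.domain` image and Docker Hub's 429 unauthenticated pull rate limit. When triaging a dump like this, tallying back-off reasons per pod separates the two quickly; a minimal sketch in Python (the regex and sample line are modeled on the log entries above and are not part of the test harness):

```python
import re
from collections import Counter

# Matches kubelet "Error syncing pod" entries and captures the back-off
# kind (ImagePullBackOff / CrashLoopBackOff) plus the namespaced pod name.
BACKOFF = re.compile(
    r'failed to \\?"StartContainer\\?" for .* with (\w+):'
    r'.*pod="([^"]+)"'
)

def tally_backoffs(lines):
    """Return a Counter of (reason, pod) pairs seen in kubelet log lines."""
    counts = Counter()
    for line in lines:
        m = BACKOFF.search(line)
        if m:
            counts[m.groups()] += 1
    return counts

# One entry shaped like the metrics-server failures in the log above.
sample = (
    'err="failed to \\"StartContainer\\" for \\"metrics-server\\" with '
    'ImagePullBackOff: ..." pod="kube-system/metrics-server-746fcd58dc-fx4gj"'
)
print(tally_backoffs([sample]))
```

Feeding the whole kubelet section through this makes it obvious that every dashboard failure here is the Docker Hub rate limit rather than a cluster problem.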

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.2s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v4zfv" [424f9607-da65-4bb7-be75-cf1ef1421095] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 15:32:20.488491 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:20.513972 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:22.032426 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:23.050061 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:28.172391 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:29.190881 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:38.413980 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:42.513935 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:58.896001 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:32:59.133665 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.799840 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.806400 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.817847 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.839321 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.880791 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:01.962375 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:02.123973 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:02.446096 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:03.087891 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:04.369729 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:06.931991 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:12.053973 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:22.296277 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:23.476035 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:39.857757 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:42.778063 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:33:51.112980 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:34:14.883059 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:34:23.740239 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:34:42.586104 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:34:45.398211 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:35:01.779901 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:35:06.218016 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:35:15.273033 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:35:42.975663 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:35:45.662295 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:36:07.248697 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:36:34.954414 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:37:01.537161 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:37:17.918403 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:37:20.513836 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:37:29.239899 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:37:45.621625 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:38:01.799952 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:38:29.504401 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:39:14.883011 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:40:06.217100 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:40:15.272537 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:41:20.76860961 +0000 UTC m=+6371.709167994
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-516717 describe po kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-516717 describe po kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-v4zfv
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-516717/192.168.72.104
Start Time:       Mon, 13 Oct 2025 15:32:13 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ndtp2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-ndtp2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  9m9s                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         9m6s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv to embed-certs-516717
  Warning  Failed            7m38s (x4 over 9m6s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling           6m8s (x5 over 9m6s)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed            6m8s (x5 over 9m6s)    kubelet            Error: ErrImagePull
  Warning  Failed            6m8s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff           3m53s (x22 over 9m5s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed            3m53s (x22 over 9m5s)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard: exit status 1 (105.943663ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-v4zfv" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
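The failure above is not a dashboard bug: every pull of `docker.io/kubernetesui/dashboard` was rejected with `429 Too Many Requests` (Docker Hub's anonymous pull rate limit), so the pod never left `ImagePullBackOff`. A minimal triage sketch (a hypothetical helper, not part of the test harness) for flagging this failure mode in captured event text:

```shell
# Hypothetical triage helper, not part of the harness: scan saved kubelet
# event text for Docker Hub's anonymous pull rate-limit error marker.
events='Failed to pull image: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit.'
if printf '%s\n' "$events" | grep -q 'toomanyrequests'; then
  echo "docker.io rate limit hit"   # prints: docker.io rate limit hit
fi
```

In a CI setting the usual mitigations are authenticated pulls or a registry mirror; the string `toomanyrequests` is the stable marker Docker Hub returns for this condition, as seen verbatim in the events above.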
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-516717 -n embed-certs-516717
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-516717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-516717 logs -n 25: (1.873325325s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                   ARGS                                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-045564 sudo systemctl cat kubelet --no-pager                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                   │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status docker --all --full --no-pager                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat docker --no-pager                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/docker/daemon.json                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo docker system info                                                                                                                                                                 │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat cri-docker --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                           │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                     │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cri-dockerd --version                                                                                                                                                              │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status containerd --all --full --no-pager                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl cat containerd --no-pager                                                                                                                                                │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /lib/systemd/system/containerd.service                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo cat /etc/containerd/config.toml                                                                                                                                                    │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo containerd config dump                                                                                                                                                             │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo systemctl status crio --all --full --no-pager                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	│ ssh     │ -p calico-045564 sudo systemctl cat crio --no-pager                                                                                                                                                      │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                            │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo crio config                                                                                                                                                                        │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p calico-045564                                                                                                                                                                                         │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p disable-driver-mounts-917680                                                                                                                                                                          │ disable-driver-mounts-917680 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:40:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:40:30.985466 1879347 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:40:30.985793 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.985805 1879347 out.go:374] Setting ErrFile to fd 2...
	I1013 15:40:30.985809 1879347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:40:30.986023 1879347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:40:30.986587 1879347 out.go:368] Setting JSON to false
	I1013 15:40:30.987896 1879347 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26579,"bootTime":1760343452,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:40:30.988008 1879347 start.go:141] virtualization: kvm guest
	I1013 15:40:30.990315 1879347 out.go:179] * [default-k8s-diff-port-426789] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:40:30.991995 1879347 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:40:30.992017 1879347 notify.go:220] Checking for updates...
	I1013 15:40:30.995009 1879347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:40:30.996863 1879347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:40:30.998430 1879347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:30.999970 1879347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:40:31.001304 1879347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:40:31.003293 1879347 config.go:182] Loaded profile config "embed-certs-516717": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003416 1879347 config.go:182] Loaded profile config "no-preload-673307": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:31.003518 1879347 config.go:182] Loaded profile config "old-k8s-version-316150": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1013 15:40:31.003630 1879347 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:40:31.043746 1879347 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 15:40:31.045311 1879347 start.go:305] selected driver: kvm2
	I1013 15:40:31.045342 1879347 start.go:925] validating driver "kvm2" against <nil>
	I1013 15:40:31.045361 1879347 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:40:31.046187 1879347 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.046323 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.063606 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.063642 1879347 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:40:31.081742 1879347 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:40:31.081796 1879347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 15:40:31.082134 1879347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:40:31.082165 1879347 cni.go:84] Creating CNI manager for ""
	I1013 15:40:31.082248 1879347 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:40:31.082260 1879347 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 15:40:31.082309 1879347 start.go:349] cluster config:
	{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:40:31.082398 1879347 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:40:31.084383 1879347 out.go:179] * Starting "default-k8s-diff-port-426789" primary control-plane node in "default-k8s-diff-port-426789" cluster
	I1013 15:40:31.085994 1879347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:40:31.086060 1879347 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:40:31.086072 1879347 cache.go:58] Caching tarball of preloaded images
	I1013 15:40:31.086202 1879347 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:40:31.086218 1879347 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:40:31.086350 1879347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:40:31.086378 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json: {Name:mk3ce3e9d016d5e915bf4b40059397909c76db20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:40:31.086576 1879347 start.go:360] acquireMachinesLock for default-k8s-diff-port-426789: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:40:31.086627 1879347 start.go:364] duration metric: took 30.495µs to acquireMachinesLock for "default-k8s-diff-port-426789"
	I1013 15:40:31.086657 1879347 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:40:31.086772 1879347 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 15:40:31.088669 1879347 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1013 15:40:31.088891 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:40:31.088947 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:40:31.104190 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I1013 15:40:31.104771 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:40:31.105336 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:40:31.105364 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:40:31.105824 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:40:31.106142 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:40:31.106356 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:31.106567 1879347 start.go:159] libmachine.API.Create for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:40:31.106603 1879347 client.go:168] LocalClient.Create starting
	I1013 15:40:31.106653 1879347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem
	I1013 15:40:31.106700 1879347 main.go:141] libmachine: Decoding PEM data...
	I1013 15:40:31.106743 1879347 main.go:141] libmachine: Parsing certificate...
	I1013 15:40:31.106828 1879347 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem
	I1013 15:40:31.106855 1879347 main.go:141] libmachine: Decoding PEM data...
	I1013 15:40:31.106876 1879347 main.go:141] libmachine: Parsing certificate...
	I1013 15:40:31.106902 1879347 main.go:141] libmachine: Running pre-create checks...
	I1013 15:40:31.106928 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .PreCreateCheck
	I1013 15:40:31.107355 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:40:31.107850 1879347 main.go:141] libmachine: Creating machine...
	I1013 15:40:31.107867 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Create
	I1013 15:40:31.108004 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) creating domain...
	I1013 15:40:31.108043 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) creating network...
	I1013 15:40:31.109684 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found existing default network
	I1013 15:40:31.109927 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network connections='3'>
	I1013 15:40:31.109954 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>default</name>
	I1013 15:40:31.109967 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 15:40:31.109979 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <forward mode='nat'>
	I1013 15:40:31.110001 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <nat>
	I1013 15:40:31.110012 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <port start='1024' end='65535'/>
	I1013 15:40:31.110022 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </nat>
	I1013 15:40:31.110034 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </forward>
	I1013 15:40:31.110046 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 15:40:31.110067 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 15:40:31.110078 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 15:40:31.110086 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.110101 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 15:40:31.110114 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.110123 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.110130 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.110142 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.110967 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.110790 1879376 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a1:e2:3d} reservation:<nil>}
	I1013 15:40:31.111781 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.111669 1879376 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002622d0}
	I1013 15:40:31.111842 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | defining private network:
	I1013 15:40:31.111863 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.111872 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network>
	I1013 15:40:31.111879 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>mk-default-k8s-diff-port-426789</name>
	I1013 15:40:31.111887 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <dns enable='no'/>
	I1013 15:40:31.111893 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:40:31.111901 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.111909 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:40:31.111916 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.111923 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.111930 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.111936 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.118484 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | creating private network mk-default-k8s-diff-port-426789 192.168.50.0/24...
	I1013 15:40:31.210527 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | private network mk-default-k8s-diff-port-426789 192.168.50.0/24 created
	I1013 15:40:31.210912 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <network>
	I1013 15:40:31.210940 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>mk-default-k8s-diff-port-426789</name>
	I1013 15:40:31.210952 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting up store path in /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 ...
	I1013 15:40:31.210975 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) building disk image from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 15:40:31.210990 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Downloading /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 15:40:31.211113 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>1a44efd4-f378-4374-a77b-9a1907787496</uuid>
	I1013 15:40:31.211151 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1013 15:40:31.211165 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <mac address='52:54:00:a9:ba:3b'/>
	I1013 15:40:31.211180 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <dns enable='no'/>
	I1013 15:40:31.211190 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1013 15:40:31.211200 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <dhcp>
	I1013 15:40:31.211215 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1013 15:40:31.211225 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </dhcp>
	I1013 15:40:31.211234 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </ip>
	I1013 15:40:31.211244 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </network>
	I1013 15:40:31.211299 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:31.211350 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.210865 1879376 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:31.576032 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:31.575840 1879376 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa...
	I1013 15:40:32.098435 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.098239 1879376 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk...
	I1013 15:40:32.098486 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Writing magic tar header
	I1013 15:40:32.098508 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Writing SSH key tar header
	I1013 15:40:32.098536 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.098436 1879376 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 ...
	I1013 15:40:32.098632 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789
	I1013 15:40:32.098657 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines
	I1013 15:40:32.098675 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789 (perms=drwx------)
	I1013 15:40:32.098726 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube/machines (perms=drwxr-xr-x)
	I1013 15:40:32.098740 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975/.minikube (perms=drwxr-xr-x)
	I1013 15:40:32.098855 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration/21724-1810975 (perms=drwxrwxr-x)
	I1013 15:40:32.098882 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:40:32.098898 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-1810975
	I1013 15:40:32.098913 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 15:40:32.098924 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 15:40:32.098933 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 15:40:32.098951 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) defining domain...
	I1013 15:40:32.099063 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home/jenkins
	I1013 15:40:32.099079 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | checking permissions on dir: /home
	I1013 15:40:32.099112 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | skipping /home - not owner
	I1013 15:40:32.100185 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) defining domain using XML: 
	I1013 15:40:32.100207 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) <domain type='kvm'>
	I1013 15:40:32.100219 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <name>default-k8s-diff-port-426789</name>
	I1013 15:40:32.100228 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <memory unit='MiB'>3072</memory>
	I1013 15:40:32.100240 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <vcpu>2</vcpu>
	I1013 15:40:32.100255 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <features>
	I1013 15:40:32.100265 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <acpi/>
	I1013 15:40:32.100274 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <apic/>
	I1013 15:40:32.100308 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <pae/>
	I1013 15:40:32.100393 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </features>
	I1013 15:40:32.100414 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <cpu mode='host-passthrough'>
	I1013 15:40:32.100425 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </cpu>
	I1013 15:40:32.100434 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <os>
	I1013 15:40:32.100441 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <type>hvm</type>
	I1013 15:40:32.100451 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <boot dev='cdrom'/>
	I1013 15:40:32.100457 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <boot dev='hd'/>
	I1013 15:40:32.100476 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <bootmenu enable='no'/>
	I1013 15:40:32.100501 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </os>
	I1013 15:40:32.100514 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   <devices>
	I1013 15:40:32.100524 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <disk type='file' device='cdrom'>
	I1013 15:40:32.100538 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/boot2docker.iso'/>
	I1013 15:40:32.100550 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target dev='hdc' bus='scsi'/>
	I1013 15:40:32.100558 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <readonly/>
	I1013 15:40:32.100565 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </disk>
	I1013 15:40:32.100574 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <disk type='file' device='disk'>
	I1013 15:40:32.100584 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 15:40:32.100602 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk'/>
	I1013 15:40:32.100612 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target dev='hda' bus='virtio'/>
	I1013 15:40:32.100619 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </disk>
	I1013 15:40:32.100627 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <interface type='network'>
	I1013 15:40:32.100640 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source network='mk-default-k8s-diff-port-426789'/>
	I1013 15:40:32.100648 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <model type='virtio'/>
	I1013 15:40:32.100656 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </interface>
	I1013 15:40:32.100663 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <interface type='network'>
	I1013 15:40:32.100687 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <source network='default'/>
	I1013 15:40:32.100694 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <model type='virtio'/>
	I1013 15:40:32.100702 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </interface>
	I1013 15:40:32.100709 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <serial type='pty'>
	I1013 15:40:32.100728 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target port='0'/>
	I1013 15:40:32.100735 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </serial>
	I1013 15:40:32.100743 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <console type='pty'>
	I1013 15:40:32.100751 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <target type='serial' port='0'/>
	I1013 15:40:32.100758 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </console>
	I1013 15:40:32.100765 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     <rng model='virtio'>
	I1013 15:40:32.100773 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)       <backend model='random'>/dev/random</backend>
	I1013 15:40:32.100780 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)     </rng>
	I1013 15:40:32.100787 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789)   </devices>
	I1013 15:40:32.100794 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) </domain>
	I1013 15:40:32.100804 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) 
	I1013 15:40:32.106463 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:68:6a:54 in network default
	I1013 15:40:32.107329 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) starting domain...
	I1013 15:40:32.107346 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) ensuring networks are active...
	I1013 15:40:32.107375 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.108459 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Ensuring network default is active
	I1013 15:40:32.109195 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Ensuring network mk-default-k8s-diff-port-426789 is active
	I1013 15:40:32.110092 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) getting domain XML...
	I1013 15:40:32.111257 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | starting domain XML:
	I1013 15:40:32.111288 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | <domain type='kvm'>
	I1013 15:40:32.111302 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <name>default-k8s-diff-port-426789</name>
	I1013 15:40:32.111318 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <uuid>4204e92c-5377-432a-9bb1-63d826e31270</uuid>
	I1013 15:40:32.111331 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:40:32.111341 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:40:32.111356 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:40:32.111367 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <os>
	I1013 15:40:32.111378 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:40:32.111390 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <boot dev='cdrom'/>
	I1013 15:40:32.111424 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <boot dev='hd'/>
	I1013 15:40:32.111453 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <bootmenu enable='no'/>
	I1013 15:40:32.111464 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </os>
	I1013 15:40:32.111478 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <features>
	I1013 15:40:32.111490 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <acpi/>
	I1013 15:40:32.111516 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <apic/>
	I1013 15:40:32.111533 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <pae/>
	I1013 15:40:32.111541 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </features>
	I1013 15:40:32.111553 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:40:32.111567 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <clock offset='utc'/>
	I1013 15:40:32.111603 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:40:32.111709 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:40:32.111744 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <on_crash>destroy</on_crash>
	I1013 15:40:32.111761 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   <devices>
	I1013 15:40:32.111778 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:40:32.111791 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <disk type='file' device='cdrom'>
	I1013 15:40:32.111803 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:40:32.111821 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/boot2docker.iso'/>
	I1013 15:40:32.111852 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:40:32.111878 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <readonly/>
	I1013 15:40:32.111895 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:40:32.111911 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </disk>
	I1013 15:40:32.111927 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <disk type='file' device='disk'>
	I1013 15:40:32.111946 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:40:32.111977 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/default-k8s-diff-port-426789.rawdisk'/>
	I1013 15:40:32.111990 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:40:32.112014 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:40:32.112028 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </disk>
	I1013 15:40:32.112039 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:40:32.112048 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:40:32.112057 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </controller>
	I1013 15:40:32.112065 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:40:32.112075 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:40:32.112088 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:40:32.112096 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </controller>
	I1013 15:40:32.112103 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <interface type='network'>
	I1013 15:40:32.112112 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <mac address='52:54:00:07:df:00'/>
	I1013 15:40:32.112119 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source network='mk-default-k8s-diff-port-426789'/>
	I1013 15:40:32.112126 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <model type='virtio'/>
	I1013 15:40:32.112135 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:40:32.112144 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </interface>
	I1013 15:40:32.112155 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <interface type='network'>
	I1013 15:40:32.112166 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <mac address='52:54:00:68:6a:54'/>
	I1013 15:40:32.112181 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <source network='default'/>
	I1013 15:40:32.112192 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <model type='virtio'/>
	I1013 15:40:32.112205 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:40:32.112218 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </interface>
	I1013 15:40:32.112236 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <serial type='pty'>
	I1013 15:40:32.112249 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target type='isa-serial' port='0'>
	I1013 15:40:32.112265 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |         <model name='isa-serial'/>
	I1013 15:40:32.112277 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       </target>
	I1013 15:40:32.112286 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </serial>
	I1013 15:40:32.112300 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <console type='pty'>
	I1013 15:40:32.112312 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <target type='serial' port='0'/>
	I1013 15:40:32.112322 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </console>
	I1013 15:40:32.112333 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:40:32.112344 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:40:32.112352 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <audio id='1' type='none'/>
	I1013 15:40:32.112366 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <memballoon model='virtio'>
	I1013 15:40:32.112383 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:40:32.112392 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </memballoon>
	I1013 15:40:32.112397 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     <rng model='virtio'>
	I1013 15:40:32.112403 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:40:32.112415 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:40:32.112424 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |     </rng>
	I1013 15:40:32.112430 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG |   </devices>
	I1013 15:40:32.112440 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | </domain>
	I1013 15:40:32.112452 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | 
	I1013 15:40:32.598676 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) waiting for domain to start...
	I1013 15:40:32.600623 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) domain is now running
	I1013 15:40:32.600652 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) waiting for IP...
	I1013 15:40:32.601752 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.602620 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:32.602651 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:32.603048 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:32.603140 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.603054 1879376 retry.go:31] will retry after 222.839819ms: waiting for domain to come up
	I1013 15:40:32.828034 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:32.828792 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:32.828821 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:32.829227 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:32.829250 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:32.829172 1879376 retry.go:31] will retry after 277.559406ms: waiting for domain to come up
	I1013 15:40:33.109037 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.109969 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.110006 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.110329 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.110359 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.110317 1879376 retry.go:31] will retry after 316.092535ms: waiting for domain to come up
	I1013 15:40:33.427954 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.428854 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.428884 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.429315 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.429344 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.429267 1879376 retry.go:31] will retry after 552.952396ms: waiting for domain to come up
	I1013 15:40:33.984083 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:33.984851 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:33.984879 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:33.985341 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:33.985417 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:33.985318 1879376 retry.go:31] will retry after 571.351202ms: waiting for domain to come up
	I1013 15:40:34.558025 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:34.558541 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:34.558568 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:34.558941 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:34.558970 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:34.558894 1879376 retry.go:31] will retry after 665.719599ms: waiting for domain to come up
	I1013 15:40:35.226260 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:35.226976 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:35.227011 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:35.227350 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:35.227378 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:35.227311 1879376 retry.go:31] will retry after 1.182674007s: waiting for domain to come up
	I1013 15:40:36.411792 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:36.412663 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:36.412689 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:36.413013 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:36.413071 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:36.412987 1879376 retry.go:31] will retry after 1.372038869s: waiting for domain to come up
	I1013 15:40:37.787107 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:37.787665 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:37.787687 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:37.788100 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:37.788129 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:37.788041 1879376 retry.go:31] will retry after 1.596227615s: waiting for domain to come up
	I1013 15:40:39.385884 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:39.386796 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:39.386844 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:39.387137 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:39.387170 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:39.387125 1879376 retry.go:31] will retry after 1.590524128s: waiting for domain to come up
	I1013 15:40:40.980098 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:40.981033 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:40.981095 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:40.981747 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:40.981786 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:40.981730 1879376 retry.go:31] will retry after 2.368318019s: waiting for domain to come up
	I1013 15:40:43.353084 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:43.353818 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:43.353851 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:43.354271 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:43.354315 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:43.354218 1879376 retry.go:31] will retry after 3.452503205s: waiting for domain to come up
	I1013 15:40:46.808487 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:46.809364 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | no network interface addresses found for domain default-k8s-diff-port-426789 (source=lease)
	I1013 15:40:46.809397 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | trying to list again with source=arp
	I1013 15:40:46.809769 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find current IP address of domain default-k8s-diff-port-426789 in network mk-default-k8s-diff-port-426789 (interfaces detected: [])
	I1013 15:40:46.809817 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | I1013 15:40:46.809748 1879376 retry.go:31] will retry after 3.609308824s: waiting for domain to come up
	I1013 15:40:50.423360 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.424116 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) found domain IP: 192.168.50.176
	I1013 15:40:50.424148 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has current primary IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.424157 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) reserving static IP address...
	I1013 15:40:50.424661 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | unable to find host DHCP lease matching {name: "default-k8s-diff-port-426789", mac: "52:54:00:07:df:00", ip: "192.168.50.176"} in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.679369 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Getting to WaitForSSH function...
	I1013 15:40:50.679404 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) reserved static IP address 192.168.50.176 for domain default-k8s-diff-port-426789
	I1013 15:40:50.679445 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) waiting for SSH...
	I1013 15:40:50.683402 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.683982 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:07:df:00}
	I1013 15:40:50.684015 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.684242 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH client type: external
	I1013 15:40:50.684277 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa (-rw-------)
	I1013 15:40:50.684390 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:40:50.684417 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | About to run SSH command:
	I1013 15:40:50.684431 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | exit 0
	I1013 15:40:50.830258 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: <nil>: 
	I1013 15:40:50.830614 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) domain creation complete
	I1013 15:40:50.831087 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:40:50.831904 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:50.832191 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:50.832457 1879347 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 15:40:50.832475 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:40:50.834578 1879347 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 15:40:50.834593 1879347 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 15:40:50.834598 1879347 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 15:40:50.834603 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:50.838117 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.838590 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:50.838639 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.838801 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:50.838998 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:50.839195 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:50.839383 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:50.839592 1879347 main.go:141] libmachine: Using SSH client type: native
	I1013 15:40:50.839855 1879347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:40:50.839869 1879347 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 15:40:50.958955 1879347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:40:50.958980 1879347 main.go:141] libmachine: Detecting the provisioner...
	I1013 15:40:50.958988 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:50.962919 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.963463 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:50.963498 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:50.963641 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:50.963875 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:50.964155 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:50.964365 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:50.964590 1879347 main.go:141] libmachine: Using SSH client type: native
	I1013 15:40:50.964861 1879347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:40:50.964876 1879347 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 15:40:51.081863 1879347 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 15:40:51.081951 1879347 main.go:141] libmachine: found compatible host: buildroot
	I1013 15:40:51.081966 1879347 main.go:141] libmachine: Provisioning with buildroot...
	I1013 15:40:51.081981 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:40:51.082315 1879347 buildroot.go:166] provisioning hostname "default-k8s-diff-port-426789"
	I1013 15:40:51.082352 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:40:51.082570 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:51.085847 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.086334 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.086362 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.086558 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:51.086747 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.086879 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.087094 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:51.087316 1879347 main.go:141] libmachine: Using SSH client type: native
	I1013 15:40:51.087581 1879347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:40:51.087595 1879347 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-426789 && echo "default-k8s-diff-port-426789" | sudo tee /etc/hostname
	I1013 15:40:51.231318 1879347 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-426789
	
	I1013 15:40:51.231360 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:51.236064 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.236592 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.236634 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.236945 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:51.237217 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.237447 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.237638 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:51.237878 1879347 main.go:141] libmachine: Using SSH client type: native
	I1013 15:40:51.238101 1879347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:40:51.238128 1879347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-426789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-426789/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-426789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:40:51.372058 1879347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:40:51.372104 1879347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:40:51.372157 1879347 buildroot.go:174] setting up certificates
	I1013 15:40:51.372179 1879347 provision.go:84] configureAuth start
	I1013 15:40:51.372200 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:40:51.372565 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:40:51.376329 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.376850 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.376876 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.377149 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:51.380301 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.380893 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.380931 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.381157 1879347 provision.go:143] copyHostCerts
	I1013 15:40:51.381247 1879347 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:40:51.381274 1879347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:40:51.381364 1879347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:40:51.381519 1879347 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:40:51.381537 1879347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:40:51.381590 1879347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:40:51.381677 1879347 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:40:51.381687 1879347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:40:51.381739 1879347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:40:51.381839 1879347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-426789 san=[127.0.0.1 192.168.50.176 default-k8s-diff-port-426789 localhost minikube]
	I1013 15:40:51.740047 1879347 provision.go:177] copyRemoteCerts
	I1013 15:40:51.740113 1879347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:40:51.740142 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:51.743871 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.744222 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.744259 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.744474 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:51.744739 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.744950 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:51.745136 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:40:51.837359 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:40:51.872212 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 15:40:51.906033 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 15:40:51.941022 1879347 provision.go:87] duration metric: took 568.819056ms to configureAuth
	I1013 15:40:51.941061 1879347 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:40:51.941291 1879347 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:40:51.941335 1879347 main.go:141] libmachine: Checking connection to Docker...
	I1013 15:40:51.941350 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetURL
	I1013 15:40:51.943010 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | using libvirt version 8000000
	I1013 15:40:51.945546 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.946084 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.946128 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.946420 1879347 main.go:141] libmachine: Docker is up and running!
	I1013 15:40:51.946453 1879347 main.go:141] libmachine: Reticulating splines...
	I1013 15:40:51.946460 1879347 client.go:171] duration metric: took 20.839845818s to LocalClient.Create
	I1013 15:40:51.946483 1879347 start.go:167] duration metric: took 20.839920231s to libmachine.API.Create "default-k8s-diff-port-426789"
	I1013 15:40:51.946490 1879347 start.go:293] postStartSetup for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:40:51.946500 1879347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:40:51.946520 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:51.946897 1879347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:40:51.946928 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:51.950284 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.950705 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:51.950764 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:51.950934 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:51.951167 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:51.951306 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:51.951448 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:40:52.047784 1879347 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:40:52.053773 1879347 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:40:52.053815 1879347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:40:52.053894 1879347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:40:52.054003 1879347 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:40:52.054111 1879347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:40:52.069658 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:40:52.105433 1879347 start.go:296] duration metric: took 158.926259ms for postStartSetup
	I1013 15:40:52.105489 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:40:52.106390 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:40:52.110388 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.110935 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:52.110966 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.111385 1879347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:40:52.111625 1879347 start.go:128] duration metric: took 21.024838061s to createHost
	I1013 15:40:52.111680 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:52.114703 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.115121 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:52.115145 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.115422 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:52.115663 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:52.115865 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:52.116114 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:52.116395 1879347 main.go:141] libmachine: Using SSH client type: native
	I1013 15:40:52.116653 1879347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:40:52.116675 1879347 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:40:52.232499 1879347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370052.201381782
	
	I1013 15:40:52.232525 1879347 fix.go:216] guest clock: 1760370052.201381782
	I1013 15:40:52.232532 1879347 fix.go:229] Guest: 2025-10-13 15:40:52.201381782 +0000 UTC Remote: 2025-10-13 15:40:52.11164672 +0000 UTC m=+21.170153533 (delta=89.735062ms)
	I1013 15:40:52.232576 1879347 fix.go:200] guest clock delta is within tolerance: 89.735062ms
	I1013 15:40:52.232582 1879347 start.go:83] releasing machines lock for "default-k8s-diff-port-426789", held for 21.145941397s
	I1013 15:40:52.232603 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:52.232936 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:40:52.236428 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.236927 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:52.236960 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.237242 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:52.237929 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:52.238148 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:40:52.238249 1879347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:40:52.238316 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:52.238386 1879347 ssh_runner.go:195] Run: cat /version.json
	I1013 15:40:52.238410 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:40:52.242204 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.242238 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.242652 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:52.242695 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.242750 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:52.242778 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:52.242861 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:52.243070 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:52.243153 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:40:52.243266 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:52.243394 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:40:52.243434 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:40:52.243567 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:40:52.243702 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:40:52.352424 1879347 ssh_runner.go:195] Run: systemctl --version
	I1013 15:40:52.359578 1879347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:40:52.367194 1879347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:40:52.367312 1879347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:40:52.390892 1879347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:40:52.390922 1879347 start.go:495] detecting cgroup driver to use...
	I1013 15:40:52.391009 1879347 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:40:52.425982 1879347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:40:52.444376 1879347 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:40:52.444466 1879347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:40:52.466186 1879347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:40:52.485116 1879347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:40:52.651343 1879347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:40:52.871032 1879347 docker.go:234] disabling docker service ...
	I1013 15:40:52.871127 1879347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:40:52.891199 1879347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:40:52.910644 1879347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:40:53.077524 1879347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:40:53.241072 1879347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:40:53.263978 1879347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:40:53.294020 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:40:53.309571 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:40:53.324628 1879347 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:40:53.324733 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:40:53.339351 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:40:53.354875 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:40:53.374207 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:40:53.390584 1879347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:40:53.407420 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:40:53.423290 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:40:53.438292 1879347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:40:53.453637 1879347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:40:53.467498 1879347 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:40:53.467584 1879347 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:40:53.495517 1879347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:40:53.510678 1879347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:40:53.667216 1879347 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:40:53.721452 1879347 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:40:53.721537 1879347 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:40:53.728662 1879347 retry.go:31] will retry after 1.129703971s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:40:54.859068 1879347 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:40:54.866546 1879347 start.go:563] Will wait 60s for crictl version
	I1013 15:40:54.866626 1879347 ssh_runner.go:195] Run: which crictl
	I1013 15:40:54.871747 1879347 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:40:54.923215 1879347 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:40:54.923322 1879347 ssh_runner.go:195] Run: containerd --version
	I1013 15:40:54.958202 1879347 ssh_runner.go:195] Run: containerd --version
	I1013 15:40:54.989866 1879347 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:40:54.990882 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:40:54.994793 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:54.995274 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:40:54.995313 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:40:54.995639 1879347 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 15:40:55.001109 1879347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:40:55.019661 1879347 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:40:55.019847 1879347 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:40:55.019929 1879347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:40:55.065857 1879347 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 15:40:55.065938 1879347 ssh_runner.go:195] Run: which lz4
	I1013 15:40:55.071014 1879347 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 15:40:55.076737 1879347 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 15:40:55.076779 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (409015552 bytes)
	I1013 15:40:56.937841 1879347 containerd.go:563] duration metric: took 1.8668609s to copy over tarball
	I1013 15:40:56.937925 1879347 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 15:40:58.783732 1879347 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.845755316s)
	I1013 15:40:58.783773 1879347 containerd.go:570] duration metric: took 1.845899888s to extract the tarball
	I1013 15:40:58.783783 1879347 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 15:40:58.840187 1879347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:40:59.003835 1879347 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:40:59.055846 1879347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:40:59.098252 1879347 retry.go:31] will retry after 137.125804ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:40:59Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1013 15:40:59.235647 1879347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:40:59.282889 1879347 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:40:59.282919 1879347 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:40:59.282927 1879347 kubeadm.go:934] updating node { 192.168.50.176 8444 v1.34.1 containerd true true} ...
	I1013 15:40:59.283123 1879347 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-426789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:40:59.283383 1879347 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:40:59.328145 1879347 cni.go:84] Creating CNI manager for ""
	I1013 15:40:59.328172 1879347 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:40:59.328195 1879347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 15:40:59.328226 1879347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.176 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-426789 NodeName:default-k8s-diff-port-426789 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:40:59.328398 1879347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.176
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-426789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:40:59.328497 1879347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:40:59.342636 1879347 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:40:59.342762 1879347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:40:59.355938 1879347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1013 15:40:59.381303 1879347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:40:59.408773 1879347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1013 15:40:59.436753 1879347 ssh_runner.go:195] Run: grep 192.168.50.176	control-plane.minikube.internal$ /etc/hosts
	I1013 15:40:59.443295 1879347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:40:59.463858 1879347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:40:59.628517 1879347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:40:59.686520 1879347 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789 for IP: 192.168.50.176
	I1013 15:40:59.686583 1879347 certs.go:195] generating shared ca certs ...
	I1013 15:40:59.686616 1879347 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:40:59.686825 1879347 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:40:59.686928 1879347 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:40:59.686945 1879347 certs.go:257] generating profile certs ...
	I1013 15:40:59.687103 1879347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key
	I1013 15:40:59.687141 1879347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.crt with IP's: []
	I1013 15:41:00.154230 1879347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.crt ...
	I1013 15:41:00.154268 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.crt: {Name:mk38f357c4f6e9280ec051944ef9fd203e9dc9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.154459 1879347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key ...
	I1013 15:41:00.154474 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key: {Name:mkf1f1b451c1a52304745d713b184041c24becd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.154566 1879347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8
	I1013 15:41:00.154582 1879347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt.1e9a3db8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.176]
	I1013 15:41:00.545053 1879347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt.1e9a3db8 ...
	I1013 15:41:00.545087 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt.1e9a3db8: {Name:mkfde500d0e48c3a2bb7de57272127c979158b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.545307 1879347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8 ...
	I1013 15:41:00.545330 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8: {Name:mkbcc72b1d2ab13bf799c3ff6fb884476ef4d729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.545449 1879347 certs.go:382] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt.1e9a3db8 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt
	I1013 15:41:00.545580 1879347 certs.go:386] copying /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key
	I1013 15:41:00.545669 1879347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key
	I1013 15:41:00.545692 1879347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt with IP's: []
	I1013 15:41:00.686903 1879347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt ...
	I1013 15:41:00.686938 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt: {Name:mk013e0031545ab7a060e38d363344e6bc88957b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.687171 1879347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key ...
	I1013 15:41:00.687194 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key: {Name:mk0230754404fc787b007f722823a47e2a6071ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:00.687505 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:41:00.687562 1879347 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:41:00.687577 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:41:00.687608 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:41:00.687639 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:41:00.687668 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:41:00.687736 1879347 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:41:00.688446 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:41:00.725980 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:41:00.763977 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:41:00.800018 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:41:00.835263 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 15:41:00.870239 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 15:41:00.907978 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:41:00.949264 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:41:00.986769 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:41:01.020865 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:41:01.056423 1879347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:41:01.102942 1879347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:41:01.130274 1879347 ssh_runner.go:195] Run: openssl version
	I1013 15:41:01.139476 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:41:01.158912 1879347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:41:01.164731 1879347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:41:01.164824 1879347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:41:01.172853 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:41:01.188491 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:41:01.203174 1879347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:41:01.209757 1879347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:41:01.209833 1879347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:41:01.218101 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:41:01.232481 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:41:01.246735 1879347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:41:01.252943 1879347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:41:01.253028 1879347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:41:01.261309 1879347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:41:01.279015 1879347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:41:01.284655 1879347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 15:41:01.284747 1879347 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:41:01.284850 1879347 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:41:01.284914 1879347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:41:01.325111 1879347 cri.go:89] found id: ""
	I1013 15:41:01.325185 1879347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:41:01.339351 1879347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:41:01.352861 1879347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:41:01.366781 1879347 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:41:01.366805 1879347 kubeadm.go:157] found existing configuration files:
	
	I1013 15:41:01.366872 1879347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 15:41:01.379850 1879347 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:41:01.379938 1879347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:41:01.396526 1879347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 15:41:01.409603 1879347 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:41:01.409690 1879347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:41:01.424212 1879347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 15:41:01.437225 1879347 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:41:01.437301 1879347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:41:01.454303 1879347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 15:41:01.467181 1879347 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:41:01.467306 1879347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:41:01.481659 1879347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 15:41:01.546908 1879347 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 15:41:01.546997 1879347 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 15:41:01.654016 1879347 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 15:41:01.654118 1879347 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 15:41:01.654200 1879347 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 15:41:01.666359 1879347 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 15:41:01.668458 1879347 out.go:252]   - Generating certificates and keys ...
	I1013 15:41:01.668563 1879347 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 15:41:01.668695 1879347 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 15:41:01.917500 1879347 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 15:41:02.026611 1879347 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 15:41:02.286853 1879347 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 15:41:02.603298 1879347 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 15:41:02.862793 1879347 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 15:41:02.863002 1879347 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-426789 localhost] and IPs [192.168.50.176 127.0.0.1 ::1]
	I1013 15:41:03.370635 1879347 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 15:41:03.370862 1879347 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-426789 localhost] and IPs [192.168.50.176 127.0.0.1 ::1]
	I1013 15:41:03.579420 1879347 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 15:41:04.023299 1879347 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 15:41:04.773565 1879347 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 15:41:04.774489 1879347 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 15:41:05.087091 1879347 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 15:41:05.457588 1879347 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 15:41:05.980843 1879347 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 15:41:06.267967 1879347 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 15:41:06.435349 1879347 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 15:41:06.436077 1879347 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 15:41:06.443026 1879347 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 15:41:06.445093 1879347 out.go:252]   - Booting up control plane ...
	I1013 15:41:06.445210 1879347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 15:41:06.445304 1879347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 15:41:06.445366 1879347 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 15:41:06.475660 1879347 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 15:41:06.475831 1879347 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 15:41:06.484965 1879347 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 15:41:06.485144 1879347 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 15:41:06.485337 1879347 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 15:41:06.693521 1879347 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 15:41:06.693739 1879347 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 15:41:07.194227 1879347 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.314163ms
	I1013 15:41:07.199762 1879347 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 15:41:07.199887 1879347 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.176:8444/livez
	I1013 15:41:07.199993 1879347 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 15:41:07.200093 1879347 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 15:41:09.619443 1879347 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.420072073s
	I1013 15:41:11.155307 1879347 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.957064454s
	I1013 15:41:13.199443 1879347 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001197809s
	I1013 15:41:13.217073 1879347 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 15:41:13.239467 1879347 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 15:41:13.257909 1879347 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 15:41:13.258668 1879347 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-426789 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 15:41:13.272209 1879347 kubeadm.go:318] [bootstrap-token] Using token: azm75x.g56juw1wx0q7fjnv
	I1013 15:41:13.273581 1879347 out.go:252]   - Configuring RBAC rules ...
	I1013 15:41:13.273756 1879347 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 15:41:13.281655 1879347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 15:41:13.298347 1879347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 15:41:13.305629 1879347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 15:41:13.309247 1879347 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 15:41:13.313041 1879347 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 15:41:13.606282 1879347 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 15:41:14.064627 1879347 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 15:41:14.607688 1879347 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 15:41:14.611521 1879347 kubeadm.go:318] 
	I1013 15:41:14.611602 1879347 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 15:41:14.611613 1879347 kubeadm.go:318] 
	I1013 15:41:14.611691 1879347 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 15:41:14.611701 1879347 kubeadm.go:318] 
	I1013 15:41:14.611766 1879347 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 15:41:14.613041 1879347 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 15:41:14.613096 1879347 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 15:41:14.613103 1879347 kubeadm.go:318] 
	I1013 15:41:14.613170 1879347 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 15:41:14.613181 1879347 kubeadm.go:318] 
	I1013 15:41:14.613276 1879347 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 15:41:14.613298 1879347 kubeadm.go:318] 
	I1013 15:41:14.613372 1879347 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 15:41:14.613475 1879347 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 15:41:14.613542 1879347 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 15:41:14.613548 1879347 kubeadm.go:318] 
	I1013 15:41:14.613657 1879347 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 15:41:14.613768 1879347 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 15:41:14.613780 1879347 kubeadm.go:318] 
	I1013 15:41:14.613893 1879347 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token azm75x.g56juw1wx0q7fjnv \
	I1013 15:41:14.614026 1879347 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa \
	I1013 15:41:14.614062 1879347 kubeadm.go:318] 	--control-plane 
	I1013 15:41:14.614076 1879347 kubeadm.go:318] 
	I1013 15:41:14.614196 1879347 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 15:41:14.614234 1879347 kubeadm.go:318] 
	I1013 15:41:14.614334 1879347 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token azm75x.g56juw1wx0q7fjnv \
	I1013 15:41:14.614450 1879347 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:63e177a29292380fb826570633ef268f489341be04e82d74b67689b7780890fa 
	I1013 15:41:14.619623 1879347 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 15:41:14.619666 1879347 cni.go:84] Creating CNI manager for ""
	I1013 15:41:14.619676 1879347 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:41:14.621567 1879347 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:41:14.623082 1879347 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:41:14.641348 1879347 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:41:14.677252 1879347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:41:14.677388 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:14.677389 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-426789 minikube.k8s.io/updated_at=2025_10_13T15_41_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=default-k8s-diff-port-426789 minikube.k8s.io/primary=true
	I1013 15:41:14.714206 1879347 ops.go:34] apiserver oom_adj: -16
	I1013 15:41:14.856929 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:15.357920 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:15.857938 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:16.357449 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:16.857884 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:17.357963 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:17.857937 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:18.357059 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:18.857904 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:19.357884 1879347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 15:41:19.517046 1879347 kubeadm.go:1113] duration metric: took 4.839725507s to wait for elevateKubeSystemPrivileges
	I1013 15:41:19.517113 1879347 kubeadm.go:402] duration metric: took 18.232374955s to StartCluster
	I1013 15:41:19.517140 1879347 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:19.517233 1879347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:41:19.519490 1879347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:41:19.519789 1879347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 15:41:19.519804 1879347 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:41:19.519889 1879347 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:41:19.519985 1879347 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-426789"
	I1013 15:41:19.520021 1879347 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-426789"
	I1013 15:41:19.520031 1879347 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:41:19.520057 1879347 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-426789"
	I1013 15:41:19.520105 1879347 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-426789"
	I1013 15:41:19.520063 1879347 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:41:19.520728 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:41:19.520744 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:41:19.520786 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:41:19.520793 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:41:19.521507 1879347 out.go:179] * Verifying Kubernetes components...
	I1013 15:41:19.523174 1879347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:41:19.537036 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I1013 15:41:19.537049 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I1013 15:41:19.537661 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:41:19.537746 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:41:19.538207 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:41:19.538219 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:41:19.538228 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:41:19.538242 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:41:19.538691 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:41:19.538795 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:41:19.538963 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:41:19.539401 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:41:19.539430 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:41:19.544224 1879347 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-426789"
	I1013 15:41:19.544280 1879347 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:41:19.544663 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:41:19.544701 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:41:19.555867 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I1013 15:41:19.556532 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:41:19.557216 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:41:19.557245 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:41:19.557683 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:41:19.557971 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:41:19.560606 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:41:19.561836 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35231
	I1013 15:41:19.562403 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:41:19.563045 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:41:19.563077 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:41:19.563141 1879347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:41:19.563580 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:41:19.564349 1879347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:41:19.564414 1879347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:41:19.564675 1879347 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:41:19.564697 1879347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:41:19.564733 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:41:19.569829 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:41:19.570461 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:41:19.570498 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:41:19.570851 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:41:19.571101 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:41:19.571364 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:41:19.571626 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:41:19.583070 1879347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I1013 15:41:19.583625 1879347 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:41:19.584203 1879347 main.go:141] libmachine: Using API Version  1
	I1013 15:41:19.584241 1879347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:41:19.584778 1879347 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:41:19.584986 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:41:19.587233 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:41:19.587524 1879347 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:41:19.587547 1879347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:41:19.587572 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:41:19.592006 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:41:19.592626 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:40:47 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:41:19.592654 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:41:19.593119 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:41:19.593358 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:41:19.593551 1879347 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:41:19.593707 1879347 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:41:19.902317 1879347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:41:19.902317 1879347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 15:41:20.195241 1879347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:41:20.196837 1879347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	1e2ca24113eac       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   80d4daeefd3f6       dashboard-metrics-scraper-6ffb444bf9-6v4dm
	a1eeedac0325f       6e38f40d628db       8 minutes ago       Running             storage-provisioner         3                   48a056cd7065e       storage-provisioner
	5c2c9b6372899       52546a367cc9e       9 minutes ago       Running             coredns                     1                   e92c76fb8e45e       coredns-66bc5c9577-rmhlp
	4c38adb34c612       56cc512116c8f       9 minutes ago       Running             busybox                     1                   f6ae93be9ab01       busybox
	034ea310c76d5       fc25172553d79       9 minutes ago       Running             kube-proxy                  1                   e0e58aa347e2c       kube-proxy-qlfhm
	e93a05bb96f31       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         2                   48a056cd7065e       storage-provisioner
	30195cdffd020       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   440fc426ed820       etcd-embed-certs-516717
	19ae15867a847       7dd6aaa1717ab       9 minutes ago       Running             kube-scheduler              1                   0ade695d1e97c       kube-scheduler-embed-certs-516717
	253c31b6993f1       c80c8dbafe7dd       9 minutes ago       Running             kube-controller-manager     1                   e287ad4b2e531       kube-controller-manager-embed-certs-516717
	64693c2aa9a7a       c3994bc696102       9 minutes ago       Running             kube-apiserver              1                   886273bfdf7ad       kube-apiserver-embed-certs-516717
	c24ee29935db3       56cc512116c8f       11 minutes ago      Exited              busybox                     0                   7222d29395163       busybox
	3e9260910496e       52546a367cc9e       11 minutes ago      Exited              coredns                     0                   feeba6515ec1b       coredns-66bc5c9577-rmhlp
	f6d977cc58b31       fc25172553d79       11 minutes ago      Exited              kube-proxy                  0                   14fcc1ab00813       kube-proxy-qlfhm
	8a6eeb04ec582       7dd6aaa1717ab       12 minutes ago      Exited              kube-scheduler              0                   f3ed40a9ebdb6       kube-scheduler-embed-certs-516717
	5324240631f01       5f1f5298c888d       12 minutes ago      Exited              etcd                        0                   50cd8b9208e1e       etcd-embed-certs-516717
	a16aad8a0a4ea       c80c8dbafe7dd       12 minutes ago      Exited              kube-controller-manager     0                   d95879f894097       kube-controller-manager-embed-certs-516717
	d35b2999e2920       c3994bc696102       12 minutes ago      Exited              kube-apiserver              0                   796cb1205a263       kube-apiserver-embed-certs-516717
	
	
	==> containerd <==
	Oct 13 15:35:26 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:26.460857315Z" level=info msg="StartContainer for \"a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3\""
	Oct 13 15:35:26 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:26.543511584Z" level=info msg="StartContainer for \"a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3\" returns successfully"
	Oct 13 15:35:26 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:26.596056011Z" level=info msg="shim disconnected" id=a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3 namespace=k8s.io
	Oct 13 15:35:26 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:26.596138736Z" level=warning msg="cleaning up after shim disconnected" id=a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3 namespace=k8s.io
	Oct 13 15:35:26 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:26.596148642Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:35:27 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:27.437722130Z" level=info msg="RemoveContainer for \"7842e0ca9e01031fdc5e8f1297cc24d7634b8fd46db12aeb5a3d0a63f43656e0\""
	Oct 13 15:35:27 embed-certs-516717 containerd[721]: time="2025-10-13T15:35:27.448762139Z" level=info msg="RemoveContainer for \"7842e0ca9e01031fdc5e8f1297cc24d7634b8fd46db12aeb5a3d0a63f43656e0\" returns successfully"
	Oct 13 15:37:54 embed-certs-516717 containerd[721]: time="2025-10-13T15:37:54.428214425Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:37:54 embed-certs-516717 containerd[721]: time="2025-10-13T15:37:54.431423985Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:54 embed-certs-516717 containerd[721]: time="2025-10-13T15:37:54.508339611Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:37:54 embed-certs-516717 containerd[721]: time="2025-10-13T15:37:54.613402592Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:37:54 embed-certs-516717 containerd[721]: time="2025-10-13T15:37:54.613499075Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 13 15:38:07 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:07.426673163Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:38:07 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:07.431136225Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:38:07 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:07.433961490Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:38:07 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:07.434152518Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.435255123Z" level=info msg="CreateContainer within sandbox \"80d4daeefd3f6b9e1b7f1b7176dc8514a204217b4515f7d21d5c10d6db327475\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.467462545Z" level=info msg="CreateContainer within sandbox \"80d4daeefd3f6b9e1b7f1b7176dc8514a204217b4515f7d21d5c10d6db327475\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc\""
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.468950736Z" level=info msg="StartContainer for \"1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc\""
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.561950517Z" level=info msg="StartContainer for \"1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc\" returns successfully"
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.624132735Z" level=info msg="shim disconnected" id=1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc namespace=k8s.io
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.624193287Z" level=warning msg="cleaning up after shim disconnected" id=1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc namespace=k8s.io
	Oct 13 15:38:10 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:10.624207455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:38:11 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:11.011846485Z" level=info msg="RemoveContainer for \"a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3\""
	Oct 13 15:38:11 embed-certs-516717 containerd[721]: time="2025-10-13T15:38:11.021845575Z" level=info msg="RemoveContainer for \"a2acfc894945152890c6105c9e792062540efd547b4cbd9845634eb7ed7513c3\" returns successfully"
	
	
	==> coredns [3e9260910496e46f9f0c111e0059c1b373d41c5cdde09da39ee51382040eaf23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53173 - 26862 "HINFO IN 1089811145681660908.3981688596191647616. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041911236s
	
	
	==> coredns [5c2c9b6372899c44edae22b6cbdc9827e04d6faf9308b6eb5c4004430a47509b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58225 - 828 "HINFO IN 1646723403474265242.2092770015904884699. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442781572s
	
	
	==> describe nodes <==
	Name:               embed-certs-516717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-516717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=embed-certs-516717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_29_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:29:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-516717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:41:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:38:31 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:38:31 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:38:31 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:38:31 +0000   Mon, 13 Oct 2025 15:32:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    embed-certs-516717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c44e321cdeb4ff5be4320e6af8af446
	  System UUID:                9c44e321-cdeb-4ff5-be43-20e6af8af446
	  Boot ID:                    b3404ab9-a97a-4475-a450-eca21836404e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-rmhlp                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     11m
	  kube-system                 etcd-embed-certs-516717                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-embed-certs-516717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-516717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qlfhm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-embed-certs-516717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-qp476               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6v4dm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v4zfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 9m16s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node embed-certs-516717 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node embed-certs-516717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node embed-certs-516717 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeReady                12m                    kubelet          Node embed-certs-516717 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node embed-certs-516717 event: Registered Node embed-certs-516717 in Controller
	  Normal   Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-516717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-516717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-516717 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m19s                  kubelet          Node embed-certs-516717 has been rebooted, boot id: b3404ab9-a97a-4475-a450-eca21836404e
	  Normal   RegisteredNode           9m13s                  node-controller  Node embed-certs-516717 event: Registered Node embed-certs-516717 in Controller
	
	
	==> dmesg <==
	[Oct13 15:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000066] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002500] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.722647] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.113075] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.495364] kauditd_printk_skb: 184 callbacks suppressed
	[Oct13 15:32] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.756221] kauditd_printk_skb: 161 callbacks suppressed
	[  +1.564700] kauditd_printk_skb: 203 callbacks suppressed
	[  +2.734520] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.637386] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.008528] kauditd_printk_skb: 7 callbacks suppressed
	[Oct13 15:33] kauditd_printk_skb: 5 callbacks suppressed
	[ +46.995379] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:35] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:38] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [30195cdffd02082b7047e0c85252c7e56a0060292e9ebf661b6cd944d9330f5d] <==
	{"level":"warn","ts":"2025-10-13T15:32:02.018649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.036260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.048548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.075945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.087005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.109435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.137296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.148590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.157491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.170187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.180715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.188972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.200561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.215290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.235666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.245410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.267324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.276860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.310686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.332433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.343764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.410674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:40:59.764289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.701182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:40:59.765368Z","caller":"traceutil/trace.go:172","msg":"trace[1376892760] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1253; }","duration":"205.879941ms","start":"2025-10-13T15:40:59.559437Z","end":"2025-10-13T15:40:59.765317Z","steps":["trace[1376892760] 'range keys from in-memory index tree'  (duration: 204.648562ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:41:05.629395Z","caller":"traceutil/trace.go:172","msg":"trace[646268734] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"155.227782ms","start":"2025-10-13T15:41:05.474137Z","end":"2025-10-13T15:41:05.629365Z","steps":["trace[646268734] 'process raft request'  (duration: 155.075037ms)"],"step_count":1}
	
	
	==> etcd [5324240631f0124ec67ecac97c2f41c9450cd94c9b5cf7b963229f7309505980] <==
	{"level":"warn","ts":"2025-10-13T15:29:14.211822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.223256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.241896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.250471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.268566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.276180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.297987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.317165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.323784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.336140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.351093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.364462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.375651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.389820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.408129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.425708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.440898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.453833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.474213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.485611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.501971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.515719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.525842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.535503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.631437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35926","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:41:22 up 9 min,  0 users,  load average: 0.01, 0.11, 0.09
	Linux embed-certs-516717 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [64693c2aa9a7a7e7ce82c85685b50d56b40f62d945052f36e56c2bf1a75e2340] <==
	I1013 15:37:04.375723       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:37:04.375922       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:37:04.376100       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:37:04.377387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:38:04.376156       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:38:04.376238       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:38:04.376270       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:38:04.378552       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:38:04.379088       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:38:04.379278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:40:04.377647       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:40:04.377745       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:40:04.377769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:40:04.382196       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:40:04.382369       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:40:04.382388       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d35b2999e2920b182c31a06864472634271623f1ed67c5ee3fada7fc56276d8f] <==
	I1013 15:29:18.641266       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:29:18.691890       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 15:29:24.378999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:24.404907       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:24.482511       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 15:29:24.554954       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1013 15:30:09.729988       1 conn.go:339] Error on socket receive: read tcp 192.168.72.104:8443->192.168.72.1:36100: use of closed network connection
	I1013 15:30:10.497167       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:30:10.511257       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.511514       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 15:30:10.511751       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:30:10.701606       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.100.175.174"}
	W1013 15:30:10.713902       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.714602       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1013 15:30:10.737041       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.737122       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [253c31b6993f10c24713a2cdee3e3f43eab29fa6059b115ba92dcf14fd7bbf21] <==
	I1013 15:35:09.454666       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:35:39.364448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:35:39.465669       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:36:09.374655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:36:09.477819       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:36:39.398794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:36:39.493654       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:37:09.405311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:37:09.508431       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:37:39.412410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:37:39.524590       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:38:09.420707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:38:09.535617       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:38:39.431316       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:38:39.546129       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:39:09.437002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:39:09.560199       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:39:39.443107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:39:39.570894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:40:09.450528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:40:09.579630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:40:39.460650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:40:39.591620       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:41:09.467703       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:41:09.603518       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [a16aad8a0a4ea4024ab693deeee7eb7f373d8299630cbe16ddfcb4eacba83924] <==
	I1013 15:29:23.445544       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 15:29:23.445770       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 15:29:23.445915       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 15:29:23.422795       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 15:29:23.422802       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 15:29:23.452347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 15:29:23.461991       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 15:29:23.466681       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 15:29:23.467039       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 15:29:23.467049       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 15:29:23.468007       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 15:29:23.468205       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 15:29:23.469918       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 15:29:23.470453       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 15:29:23.472410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 15:29:23.477358       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 15:29:23.477372       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 15:29:23.484258       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 15:29:23.484357       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 15:29:23.484731       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-516717" podCIDRs=["10.244.0.0/24"]
	I1013 15:29:23.487549       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 15:29:23.517387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:29:23.517453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 15:29:23.517460       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 15:29:23.563235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [034ea310c76d53f3bcc7338d487d2d4f20c163467ba205608f981b10996fa6dd] <==
	I1013 15:32:05.760004       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:32:05.860557       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:32:05.860817       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.104"]
	E1013 15:32:05.861940       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:32:05.927834       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:32:05.928056       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:32:05.928487       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:32:05.940462       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:32:05.942190       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:32:05.942235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:32:05.949559       1 config.go:200] "Starting service config controller"
	I1013 15:32:05.949662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:32:05.950163       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:32:05.950172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:32:05.950280       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:32:05.950294       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:32:05.958499       1 config.go:309] "Starting node config controller"
	I1013 15:32:05.958533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:32:05.958542       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:32:06.050758       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:32:06.050815       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:32:06.053929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f6d977cc58b317b9be2991e680b77068e09df90adedd531606b0a01dc5e2a409] <==
	I1013 15:29:26.189119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:29:26.296992       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:29:26.297045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.104"]
	E1013 15:29:26.297836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:29:26.471542       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:29:26.472442       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:29:26.472596       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:29:26.488717       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:29:26.489962       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:29:26.489994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:29:26.503053       1 config.go:200] "Starting service config controller"
	I1013 15:29:26.503097       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:29:26.503127       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:29:26.503133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:29:26.503150       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:29:26.503156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:29:26.504548       1 config.go:309] "Starting node config controller"
	I1013 15:29:26.504575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:29:26.504582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:29:26.603847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:29:26.604190       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:29:26.604411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19ae15867a847a8163ed2c6d37159c5b71da4795a2b238627c44ea94ae551555] <==
	I1013 15:32:01.995333       1 serving.go:386] Generated self-signed cert in-memory
	W1013 15:32:03.347300       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:32:03.347400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:32:03.347415       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:32:03.348101       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:32:03.462815       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 15:32:03.467695       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:32:03.473134       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:32:03.473619       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:32:03.478883       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 15:32:03.479268       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 15:32:03.574770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8a6eeb04ec5821eeaf74ba0e78207ad2cd27bf89df2419de7f4e31e12a209a77] <==
	E1013 15:29:15.556970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:29:15.557371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:29:15.559353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:29:15.559951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:29:15.559262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:29:15.560246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:29:15.560346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:29:15.560401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:29:16.387743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:29:16.402126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:29:16.457418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:29:16.457418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:29:16.519611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:29:16.552818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:29:16.572076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 15:29:16.728623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:29:16.731040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:29:16.742824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:29:16.803800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 15:29:16.839331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 15:29:16.972398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 15:29:17.010393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:29:17.021635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 15:29:17.035244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1013 15:29:19.113174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 15:40:03 embed-certs-516717 kubelet[1040]: E1013 15:40:03.427129    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:40:12 embed-certs-516717 kubelet[1040]: E1013 15:40:12.430960    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:40:13 embed-certs-516717 kubelet[1040]: I1013 15:40:13.424901    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:40:13 embed-certs-516717 kubelet[1040]: E1013 15:40:13.425276    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:40:15 embed-certs-516717 kubelet[1040]: E1013 15:40:15.426774    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:40:26 embed-certs-516717 kubelet[1040]: I1013 15:40:26.426200    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:40:26 embed-certs-516717 kubelet[1040]: E1013 15:40:26.426419    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:40:26 embed-certs-516717 kubelet[1040]: E1013 15:40:26.427478    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:40:28 embed-certs-516717 kubelet[1040]: E1013 15:40:28.426745    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:40:37 embed-certs-516717 kubelet[1040]: E1013 15:40:37.427463    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:40:40 embed-certs-516717 kubelet[1040]: E1013 15:40:40.425924    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:40:41 embed-certs-516717 kubelet[1040]: I1013 15:40:41.425287    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:40:41 embed-certs-516717 kubelet[1040]: E1013 15:40:41.425594    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:40:48 embed-certs-516717 kubelet[1040]: E1013 15:40:48.427346    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:40:54 embed-certs-516717 kubelet[1040]: I1013 15:40:54.424372    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:40:54 embed-certs-516717 kubelet[1040]: E1013 15:40:54.424659    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:40:55 embed-certs-516717 kubelet[1040]: E1013 15:40:55.426877    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:40:59 embed-certs-516717 kubelet[1040]: E1013 15:40:59.426492    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:41:05 embed-certs-516717 kubelet[1040]: I1013 15:41:05.424436    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:41:05 embed-certs-516717 kubelet[1040]: E1013 15:41:05.424709    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:41:10 embed-certs-516717 kubelet[1040]: E1013 15:41:10.426979    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:41:11 embed-certs-516717 kubelet[1040]: E1013 15:41:11.426839    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:41:19 embed-certs-516717 kubelet[1040]: I1013 15:41:19.423939    1040 scope.go:117] "RemoveContainer" containerID="1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc"
	Oct 13 15:41:19 embed-certs-516717 kubelet[1040]: E1013 15:41:19.424207    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:41:21 embed-certs-516717 kubelet[1040]: E1013 15:41:21.428520    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	
	
	==> storage-provisioner [a1eeedac0325f3ca4472865170525536db210d669cc7996f65820d724d30f4c2] <==
	W1013 15:40:57.336652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:59.342320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:40:59.437371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:01.442695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:01.455589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:03.460379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:03.467069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:05.471222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:05.631920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:07.637867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:07.646288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:09.651513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:09.658967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:11.663846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:11.673216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:13.677093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:13.682782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:15.687964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:15.694549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:17.698852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:17.705311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:19.709146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:19.722826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:21.728975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:41:21.746297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e93a05bb96f31fdbf4186d41077f4f8e665dbf0ddaa6b77822ff6d870340c78b] <==
	I1013 15:32:05.319753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:32:35.338617       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-516717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv: exit status 1 (67.435774ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-qp476" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-v4zfv" not found

** /stderr **
helpers_test.go:287: kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.20s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.22s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dqs5m" [3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 15:41:07.248478 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:49:51.583377319 +0000 UTC m=+6882.523935691
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-673307 describe po kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-673307 describe po kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-dqs5m
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-673307/192.168.61.180
Start Time:       Mon, 13 Oct 2025 15:31:42 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lmpr6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-lmpr6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                  From               Message
----     ------            ----                 ----               -------
Warning  FailedScheduling  18m                  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         18m                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m to no-preload-673307
Warning  Failed            16m                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           15m (x5 over 18m)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            15m (x4 over 18m)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed            15m (x5 over 18m)    kubelet            Error: ErrImagePull
Normal   BackOff           3m7s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            3m7s (x64 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard: exit status 1 (91.126951ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-dqs5m" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-673307 logs kubernetes-dashboard-855c9754f9-dqs5m -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-673307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673307 -n no-preload-673307
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-673307 logs -n 25: (1.754077758s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                     ARGS                                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-045564 sudo systemctl cat crio --no-pager                                                                                                                                                                                                                           │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                                                 │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ ssh     │ -p calico-045564 sudo crio config                                                                                                                                                                                                                                             │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p calico-045564                                                                                                                                                                                                                                                              │ calico-045564                │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ delete  │ -p disable-driver-mounts-917680                                                                                                                                                                                                                                               │ disable-driver-mounts-917680 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:40 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-426789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ stop    │ -p default-k8s-diff-port-426789 --alsologtostderr -v=3                                                                                                                                                                                                                        │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:43 UTC │
	│ image   │ old-k8s-version-316150 image list --format=json                                                                                                                                                                                                                               │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ pause   │ -p old-k8s-version-316150 --alsologtostderr -v=1                                                                                                                                                                                                                              │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ unpause │ -p old-k8s-version-316150 --alsologtostderr -v=1                                                                                                                                                                                                                              │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-426789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                       │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                                       │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ stop    │ -p newest-cni-400509 --alsologtostderr -v=3                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-400509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                                  │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ image   │ newest-cni-400509 image list --format=json                                                                                                                                                                                                                                    │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ pause   │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ unpause │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:43:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:43:36.713594 1881569 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:43:36.713867 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.713876 1881569 out.go:374] Setting ErrFile to fd 2...
	I1013 15:43:36.713881 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.714128 1881569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:43:36.714601 1881569 out.go:368] Setting JSON to false
	I1013 15:43:36.715659 1881569 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26765,"bootTime":1760343452,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:43:36.715764 1881569 start.go:141] virtualization: kvm guest
	I1013 15:43:36.717879 1881569 out.go:179] * [newest-cni-400509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:43:36.719306 1881569 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:43:36.719352 1881569 notify.go:220] Checking for updates...
	I1013 15:43:36.722297 1881569 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:43:36.723784 1881569 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:36.728380 1881569 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:43:36.729831 1881569 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:43:36.731178 1881569 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:43:36.733044 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:36.733466 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.733553 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.748649 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1013 15:43:36.749362 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.749950 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.749983 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.750498 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.750765 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.751059 1881569 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:43:36.751384 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.751424 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.766235 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I1013 15:43:36.766738 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.767297 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.767322 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.767684 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.767908 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.805154 1881569 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 15:43:36.806336 1881569 start.go:305] selected driver: kvm2
	I1013 15:43:36.806354 1881569 start.go:925] validating driver "kvm2" against &{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.806467 1881569 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:43:36.807212 1881569 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.807326 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.823011 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.823050 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.837875 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.838417 1881569 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:43:36.838458 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:36.838518 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:36.838573 1881569 start.go:349] cluster config:
	{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.838736 1881569 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.841828 1881569 out.go:179] * Starting "newest-cni-400509" primary control-plane node in "newest-cni-400509" cluster
	I1013 15:43:35.461409 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:35.461442 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:35.461456 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | command : exit 0
	I1013 15:43:35.461470 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | err     : exit status 255
	I1013 15:43:35.461482 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | output  : 
	I1013 15:43:38.463606 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Getting to WaitForSSH function...
	I1013 15:43:38.467055 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.467571 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467755 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH client type: external
	I1013 15:43:38.467781 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa (-rw-------)
	I1013 15:43:38.467825 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:38.467840 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | About to run SSH command:
	I1013 15:43:38.467903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | exit 0
	I1013 15:43:36.843198 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:36.843293 1881569 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:43:36.843334 1881569 cache.go:58] Caching tarball of preloaded images
	I1013 15:43:36.843490 1881569 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:43:36.843509 1881569 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:43:36.843683 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:36.843944 1881569 start.go:360] acquireMachinesLock for newest-cni-400509: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:43:39.632101 1881569 start.go:364] duration metric: took 2.788099128s to acquireMachinesLock for "newest-cni-400509"
	I1013 15:43:39.632152 1881569 start.go:96] Skipping create...Using existing machine configuration
	I1013 15:43:39.632159 1881569 fix.go:54] fixHost starting: 
	I1013 15:43:39.632598 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:39.632657 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:39.649454 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I1013 15:43:39.650005 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:39.650546 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:39.650575 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:39.651029 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:39.651238 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:39.651401 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:43:39.654204 1881569 fix.go:112] recreateIfNeeded on newest-cni-400509: state=Stopped err=<nil>
	I1013 15:43:39.654249 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	W1013 15:43:39.654457 1881569 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 15:43:39.656851 1881569 out.go:252] * Restarting existing kvm2 VM for "newest-cni-400509" ...
	I1013 15:43:39.656907 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Start
	I1013 15:43:39.657076 1881569 main.go:141] libmachine: (newest-cni-400509) starting domain...
	I1013 15:43:39.657101 1881569 main.go:141] libmachine: (newest-cni-400509) ensuring networks are active...
	I1013 15:43:39.657900 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network default is active
	I1013 15:43:39.658431 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network mk-newest-cni-400509 is active
	I1013 15:43:39.658999 1881569 main.go:141] libmachine: (newest-cni-400509) getting domain XML...
	I1013 15:43:39.660153 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | starting domain XML:
	I1013 15:43:39.660177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | <domain type='kvm'>
	I1013 15:43:39.660215 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <name>newest-cni-400509</name>
	I1013 15:43:39.660260 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <uuid>27888586-a2e0-44db-a3c9-b78f39af9148</uuid>
	I1013 15:43:39.660278 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:43:39.660290 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:43:39.660307 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:43:39.660324 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <os>
	I1013 15:43:39.660338 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:43:39.660350 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='cdrom'/>
	I1013 15:43:39.660363 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='hd'/>
	I1013 15:43:39.660374 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <bootmenu enable='no'/>
	I1013 15:43:39.660381 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </os>
	I1013 15:43:39.660390 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <features>
	I1013 15:43:39.660431 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <acpi/>
	I1013 15:43:39.660458 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <apic/>
	I1013 15:43:39.660475 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <pae/>
	I1013 15:43:39.660482 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </features>
	I1013 15:43:39.660495 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:43:39.660517 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <clock offset='utc'/>
	I1013 15:43:39.660527 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:43:39.660535 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:43:39.660544 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_crash>destroy</on_crash>
	I1013 15:43:39.660554 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <devices>
	I1013 15:43:39.660565 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:43:39.660576 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='cdrom'>
	I1013 15:43:39.660585 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:43:39.660601 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/boot2docker.iso'/>
	I1013 15:43:39.660614 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:43:39.660624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <readonly/>
	I1013 15:43:39.660636 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:43:39.660645 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660655 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='disk'>
	I1013 15:43:39.660666 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:43:39.660683 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/newest-cni-400509.rawdisk'/>
	I1013 15:43:39.660701 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:43:39.660725 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:43:39.660734 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660746 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:43:39.660766 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:43:39.660777 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660795 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:43:39.660809 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:43:39.660833 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:43:39.660845 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660852 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660865 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:a8:3a:80'/>
	I1013 15:43:39.660880 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='mk-newest-cni-400509'/>
	I1013 15:43:39.660909 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.660934 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:43:39.660966 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.660982 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660998 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:ee:bd:4a'/>
	I1013 15:43:39.661014 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='default'/>
	I1013 15:43:39.661026 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.661044 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:43:39.661064 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.661072 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <serial type='pty'>
	I1013 15:43:39.661080 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='isa-serial' port='0'>
	I1013 15:43:39.661093 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |         <model name='isa-serial'/>
	I1013 15:43:39.661105 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       </target>
	I1013 15:43:39.661112 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </serial>
	I1013 15:43:39.661125 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <console type='pty'>
	I1013 15:43:39.661132 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='serial' port='0'/>
	I1013 15:43:39.661139 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </console>
	I1013 15:43:39.661146 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:43:39.661173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:43:39.661192 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <audio id='1' type='none'/>
	I1013 15:43:39.661213 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <memballoon model='virtio'>
	I1013 15:43:39.661263 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:43:39.661276 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </memballoon>
	I1013 15:43:39.661285 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <rng model='virtio'>
	I1013 15:43:39.661305 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:43:39.661325 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:43:39.661337 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </rng>
	I1013 15:43:39.661348 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </devices>
	I1013 15:43:39.661357 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | </domain>
	I1013 15:43:39.661367 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | 
	I1013 15:43:40.126826 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for domain to start...
	I1013 15:43:40.128784 1881569 main.go:141] libmachine: (newest-cni-400509) domain is now running
	I1013 15:43:40.128813 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for IP...
	I1013 15:43:40.129922 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.130919 1881569 main.go:141] libmachine: (newest-cni-400509) found domain IP: 192.168.39.58
	I1013 15:43:40.130941 1881569 main.go:141] libmachine: (newest-cni-400509) reserving static IP address...
	I1013 15:43:40.130955 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has current primary IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.131624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.131659 1881569 main.go:141] libmachine: (newest-cni-400509) reserved static IP address 192.168.39.58 for domain newest-cni-400509
	I1013 15:43:40.131687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | skip adding static IP to network mk-newest-cni-400509 - found existing host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"}
	I1013 15:43:40.131707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:40.131747 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for SSH...
	I1013 15:43:40.134418 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.134976 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.135005 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.135191 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:40.135247 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:40.135291 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:40.135327 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:40.135339 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:38.610349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:38.610819 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:43:38.611609 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:38.614998 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.615574 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615849 1881287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:43:38.616089 1881287 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:38.616107 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:38.616354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.619808 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620495 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.620528 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.620947 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621205 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621440 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.621677 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.621969 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.621982 1881287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:38.741296 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:38.741340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741648 1881287 buildroot.go:166] provisioning hostname "default-k8s-diff-port-426789"
	I1013 15:43:38.741682 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741931 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.745516 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746082 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.746124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.746557 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746778 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746938 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.747114 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.747384 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.747401 1881287 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-426789 && echo "default-k8s-diff-port-426789" | sudo tee /etc/hostname
	I1013 15:43:38.883536 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-426789
	
	I1013 15:43:38.883566 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.886934 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887401 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.887445 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887640 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.887893 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888084 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888211 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.888374 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.888582 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.888599 1881287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-426789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-426789/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-426789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:39.017088 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:39.017119 1881287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:39.017144 1881287 buildroot.go:174] setting up certificates
	I1013 15:43:39.017158 1881287 provision.go:84] configureAuth start
	I1013 15:43:39.017194 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:39.017591 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.020991 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021443 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.021466 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021667 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.024308 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.024740 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.024775 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.025056 1881287 provision.go:143] copyHostCerts
	I1013 15:43:39.025124 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:39.025142 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:39.025243 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:39.025421 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:39.025436 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:39.025483 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:39.025608 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:39.025622 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:39.025662 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:39.025772 1881287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-426789 san=[127.0.0.1 192.168.50.176 default-k8s-diff-port-426789 localhost minikube]
	I1013 15:43:39.142099 1881287 provision.go:177] copyRemoteCerts
	I1013 15:43:39.142168 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:39.142198 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.146110 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146639 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.146665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146950 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.147180 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.147364 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.147518 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.238167 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:39.273616 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 15:43:39.314055 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 15:43:39.358579 1881287 provision.go:87] duration metric: took 341.404418ms to configureAuth
	I1013 15:43:39.358616 1881287 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:39.358839 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:39.358854 1881287 machine.go:96] duration metric: took 742.756264ms to provisionDockerMachine
	I1013 15:43:39.358864 1881287 start.go:293] postStartSetup for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:43:39.358874 1881287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:39.358903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.359307 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:39.359349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.362558 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.362951 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.362982 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.363306 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.363546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.363773 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.363949 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.454925 1881287 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:39.460515 1881287 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:39.460550 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:39.460650 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:39.460784 1881287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:39.460899 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:39.474542 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:39.506976 1881287 start.go:296] duration metric: took 148.091906ms for postStartSetup
	I1013 15:43:39.507038 1881287 fix.go:56] duration metric: took 15.862602997s for fixHost
	I1013 15:43:39.507067 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.510376 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.510803 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.510837 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.511112 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.511361 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511540 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511666 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.511848 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:39.512046 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:39.512057 1881287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:39.631899 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370219.586411289
	
	I1013 15:43:39.631925 1881287 fix.go:216] guest clock: 1760370219.586411289
	I1013 15:43:39.631933 1881287 fix.go:229] Guest: 2025-10-13 15:43:39.586411289 +0000 UTC Remote: 2025-10-13 15:43:39.507044166 +0000 UTC m=+16.050668033 (delta=79.367123ms)
	I1013 15:43:39.631970 1881287 fix.go:200] guest clock delta is within tolerance: 79.367123ms
	I1013 15:43:39.631976 1881287 start.go:83] releasing machines lock for "default-k8s-diff-port-426789", held for 15.987562481s
	I1013 15:43:39.632004 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.632313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.636049 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636504 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.636554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636797 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637455 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637669 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637818 1881287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:39.637878 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.637920 1881287 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:39.637952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.641477 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641994 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642042 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642070 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642087 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642314 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642327 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642551 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642858 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.642902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.734708 1881287 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:39.760037 1881287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:39.768523 1881287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:39.768671 1881287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:39.792919 1881287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:39.792950 1881287 start.go:495] detecting cgroup driver to use...
	I1013 15:43:39.793023 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:39.831232 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:39.850993 1881287 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:39.851102 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:39.873826 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:39.896556 1881287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:40.064028 1881287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:40.305591 1881287 docker.go:234] disabling docker service ...
	I1013 15:43:40.305667 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:40.324329 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:40.340817 1881287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:40.541438 1881287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:40.704419 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:40.723755 1881287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:40.752026 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:40.767452 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:40.782881 1881287 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:40.782958 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:40.798473 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.813327 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:40.828869 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.843772 1881287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:40.859620 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:40.876007 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:40.891780 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:43:40.907887 1881287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:40.919493 1881287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:40.919559 1881287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:40.950308 1881287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:40.968591 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:41.139186 1881287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:41.183301 1881287 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:41.183403 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:41.190223 1881287 retry.go:31] will retry after 1.16806029s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:43:42.358579 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:42.366926 1881287 start.go:563] Will wait 60s for crictl version
	I1013 15:43:42.367063 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:42.372655 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:42.429723 1881287 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:42.429814 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.471739 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.509604 1881287 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:42.511075 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:42.514790 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:42.515383 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515708 1881287 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:42.520820 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:42.537702 1881287 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:42.537834 1881287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:42.537882 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.577897 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.577934 1881287 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:42.578012 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.626753 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.626790 1881287 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:42.626816 1881287 kubeadm.go:934] updating node { 192.168.50.176 8444 v1.34.1 containerd true true} ...
	I1013 15:43:42.626973 1881287 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-426789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:42.627112 1881287 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:42.670994 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:42.671035 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:42.671067 1881287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 15:43:42.671108 1881287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.176 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-426789 NodeName:default-k8s-diff-port-426789 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:42.671296 1881287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.176
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-426789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:42.671382 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:42.685850 1881287 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:42.685938 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:42.702293 1881287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1013 15:43:42.726402 1881287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:43:42.754908 1881287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1013 15:43:42.782246 1881287 ssh_runner.go:195] Run: grep 192.168.50.176	control-plane.minikube.internal$ /etc/hosts
	I1013 15:43:42.788445 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:42.806629 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:42.987595 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:43.027112 1881287 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789 for IP: 192.168.50.176
	I1013 15:43:43.027140 1881287 certs.go:195] generating shared ca certs ...
	I1013 15:43:43.027163 1881287 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.027383 1881287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:43:43.027460 1881287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:43:43.027483 1881287 certs.go:257] generating profile certs ...
	I1013 15:43:43.027635 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key
	I1013 15:43:43.027760 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8
	I1013 15:43:43.027826 1881287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key
	I1013 15:43:43.027999 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:43:43.028050 1881287 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:43:43.028066 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:43:43.028098 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:43:43.028131 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:43:43.028163 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:43:43.028239 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:43.029002 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:43:43.082431 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:43:43.140436 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:43:43.210359 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:43:43.257226 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 15:43:43.298663 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 15:43:43.332285 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:43:43.369205 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:43:43.410586 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:43:43.451819 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:43:43.486367 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:43:43.524801 1881287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:43:43.547937 1881287 ssh_runner.go:195] Run: openssl version
	I1013 15:43:43.555474 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:43:43.571070 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579175 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579263 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.587603 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:43:43.604566 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:43:43.620309 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.626957 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.627045 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.635543 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:43:43.651153 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:43:43.666800 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674478 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674540 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.685525 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:43:43.702224 1881287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:43:43.709862 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:43:43.720756 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:43:43.729444 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:43:43.737616 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:43:43.745934 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:43:43.754091 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 15:43:43.762115 1881287 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:43.762208 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:43:43.762293 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.808267 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.808301 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.808306 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.808312 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.808316 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.808322 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.808327 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.808338 1881287 cri.go:89] found id: ""
	I1013 15:43:43.808404 1881287 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:43:43.831377 1881287 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:43:43Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:43:43.831483 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:43:43.845227 1881287 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:43:43.845260 1881287 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:43:43.845327 1881287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:43:43.863194 1881287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:43:43.864292 1881287 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-426789" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:43.864923 1881287 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-426789" cluster setting kubeconfig missing "default-k8s-diff-port-426789" context setting]
	I1013 15:43:43.865728 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.867585 1881287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:43:43.883585 1881287 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.50.176
	I1013 15:43:43.883642 1881287 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:43:43.883662 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:43:43.883756 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.948818 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.948851 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.948857 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.948863 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.948868 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.948872 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.948876 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.948880 1881287 cri.go:89] found id: ""
	I1013 15:43:43.948890 1881287 cri.go:252] Stopping containers: [7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547]
	I1013 15:43:43.948976 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:43.955264 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547
	I1013 15:43:44.001390 1881287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:43:44.022439 1881287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:43:44.035325 1881287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:43:44.035351 1881287 kubeadm.go:157] found existing configuration files:
	
	I1013 15:43:44.035411 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 15:43:44.047208 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:43:44.047292 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:43:44.060647 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 15:43:44.074202 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:43:44.074279 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:43:44.088532 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.103533 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:43:44.103601 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.122077 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 15:43:44.134937 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:43:44.135018 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:43:44.147842 1881287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:43:44.162447 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:44.318010 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:45.992643 1881287 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.674585761s)
	I1013 15:43:45.992768 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.260999 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.358031 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.484897 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:46.485026 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:46.986001 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.485965 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.985368 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:48.031141 1881287 api_server.go:72] duration metric: took 1.546261555s to wait for apiserver process to appear ...
	I1013 15:43:48.031174 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:48.031199 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.397143 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:51.397186 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:51.397205 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | command : exit 0
	I1013 15:43:51.397214 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | err     : exit status 255
	I1013 15:43:51.397235 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | output  : 
	I1013 15:43:50.751338 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.751376 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:43:50.751412 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:50.842254 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.842294 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:43:51.031709 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.038850 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.038888 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:51.531498 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.540163 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.540193 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:52.031686 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.042465 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:52.042504 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:52.531913 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.538420 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:52.550202 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:52.550246 1881287 api_server.go:131] duration metric: took 4.519061614s to wait for apiserver health ...
	I1013 15:43:52.550262 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:52.550273 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:52.552571 1881287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:43:52.554067 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:43:52.574739 1881287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:43:52.604706 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:52.613468 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:52.613525 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:52.613537 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:52.613547 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:52.613563 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:52.613576 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 15:43:52.613595 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:52.613609 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:52.613617 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:43:52.613628 1881287 system_pods.go:74] duration metric: took 8.879878ms to wait for pod list to return data ...
	I1013 15:43:52.613643 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:52.618132 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:52.618175 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:52.618192 1881287 node_conditions.go:105] duration metric: took 4.543501ms to run NodePressure ...
	I1013 15:43:52.618275 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:53.069625 1881287 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076322 1881287 kubeadm.go:743] kubelet initialised
	I1013 15:43:53.076353 1881287 kubeadm.go:744] duration metric: took 6.69335ms waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076378 1881287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:43:53.108126 1881287 ops.go:34] apiserver oom_adj: -16
	I1013 15:43:53.108163 1881287 kubeadm.go:601] duration metric: took 9.262892964s to restartPrimaryControlPlane
	I1013 15:43:53.108181 1881287 kubeadm.go:402] duration metric: took 9.346075744s to StartCluster
	I1013 15:43:53.108210 1881287 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.108336 1881287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:53.110574 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.111002 1881287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:43:53.111137 1881287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:43:53.111274 1881287 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111277 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:53.111300 1881287 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111313 1881287 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:43:53.111324 1881287 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111339 1881287 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111346 1881287 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:53.111350 1881287 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111359 1881287 addons.go:247] addon dashboard should already be in state true
	W1013 15:43:53.111360 1881287 addons.go:247] addon metrics-server should already be in state true
	I1013 15:43:53.111379 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111387 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111402 1881287 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111347 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111445 1881287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-426789"
	I1013 15:43:53.111808 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111805 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111835 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111848 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111868 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111964 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.112184 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.112238 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.115926 1881287 out.go:179] * Verifying Kubernetes components...
	I1013 15:43:53.117837 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:53.131021 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I1013 15:43:53.131145 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I1013 15:43:53.131263 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I1013 15:43:53.131306 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1013 15:43:53.131780 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.131963 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132182 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132306 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132328 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132489 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132502 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132656 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132786 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132818 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132923 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.132945 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133266 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133335 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.133352 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.133493 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.133868 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.133922 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134084 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.134115 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134175 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.135005 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.135097 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.138473 1881287 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.138535 1881287 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:43:53.138571 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.138951 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.138996 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.153375 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1013 15:43:53.154086 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1013 15:43:53.154354 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.154973 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.155287 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155384 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155522 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155588 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155980 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156055 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156311 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.156695 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.159943 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.160580 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.161397 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I1013 15:43:53.161596 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1013 15:43:53.162371 1881287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:43:53.162442 1881287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:43:53.162491 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.162623 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.163108 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163158 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163241 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163269 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163621 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163868 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163948 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.164392 1881287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.164414 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:43:53.164436 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.164610 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.164680 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.165704 1881287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:43:53.167086 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:43:53.167111 1881287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:43:53.167145 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.167519 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.169405 1881287 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:43:53.170806 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:43:53.170839 1881287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:43:53.170868 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.170970 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.172904 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.172958 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.173486 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.174763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.175298 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.175869 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.177546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.178363 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179072 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179380 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179403 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179451 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179501 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179539 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179550 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179830 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179923 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.180049 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.188031 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I1013 15:43:53.188746 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.189369 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.189391 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.189889 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.190124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.192665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.192993 1881287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.193015 1881287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:43:53.193041 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.197517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198127 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.198171 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198708 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.198952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.199191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.199425 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:54.398978 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:54.402868 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403485 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.403522 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403692 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:54.403735 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:54.403786 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:54.403800 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:54.403823 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:54.544257 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:54.544730 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetConfigRaw
	I1013 15:43:54.545413 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.549394 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550047 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.550090 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550494 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:54.550797 1881569 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:54.550830 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:54.551132 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.554299 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.554754 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554943 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.555175 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555424 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555617 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.555946 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.556248 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.556260 1881569 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:54.688707 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:54.688778 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689138 1881569 buildroot.go:166] provisioning hostname "newest-cni-400509"
	I1013 15:43:54.689168 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689397 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.693596 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694246 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.694300 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694537 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.694811 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695013 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695198 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.695392 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.695702 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.695740 1881569 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400509 && echo "newest-cni-400509" | sudo tee /etc/hostname
	I1013 15:43:54.834089 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400509
	
	I1013 15:43:54.834128 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.838142 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.838584 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.838632 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.839006 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.839287 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839492 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839694 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.840030 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.840291 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.840310 1881569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400509/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:54.976516 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:54.976554 1881569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:54.976618 1881569 buildroot.go:174] setting up certificates
	I1013 15:43:54.976643 1881569 provision.go:84] configureAuth start
	I1013 15:43:54.976668 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.977165 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.981371 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.981937 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.981969 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.982449 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.986173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986658 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.986687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986975 1881569 provision.go:143] copyHostCerts
	I1013 15:43:54.987049 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:54.987072 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:54.987167 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:54.987325 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:54.987339 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:54.987386 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:54.987492 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:54.987508 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:54.987563 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:54.987652 1881569 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400509 san=[127.0.0.1 192.168.39.58 localhost minikube newest-cni-400509]
	I1013 15:43:56.105921 1881569 provision.go:177] copyRemoteCerts
	I1013 15:43:56.105986 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:56.106012 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.109883 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110333 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.110378 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.110940 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.111126 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.111313 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.204900 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:56.250950 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 15:43:56.289008 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 15:43:56.329429 1881569 provision.go:87] duration metric: took 1.352737429s to configureAuth
	I1013 15:43:56.329473 1881569 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:56.329690 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:56.329707 1881569 machine.go:96] duration metric: took 1.778889003s to provisionDockerMachine
	I1013 15:43:56.329732 1881569 start.go:293] postStartSetup for "newest-cni-400509" (driver="kvm2")
	I1013 15:43:56.329749 1881569 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:56.329787 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.330185 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:56.330228 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.334038 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334514 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.334549 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334786 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.335028 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.335223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.335409 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.434835 1881569 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:56.440734 1881569 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:56.440767 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:56.440835 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:56.440916 1881569 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:56.441040 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:56.459176 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:56.502925 1881569 start.go:296] duration metric: took 173.137045ms for postStartSetup
	I1013 15:43:56.502995 1881569 fix.go:56] duration metric: took 16.870835137s for fixHost
	I1013 15:43:56.503030 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.506452 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.506870 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.506935 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.507108 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.507367 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507582 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507785 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.508020 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:56.508247 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:56.508261 1881569 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:56.624915 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370236.574388905
	
	I1013 15:43:56.624944 1881569 fix.go:216] guest clock: 1760370236.574388905
	I1013 15:43:56.624957 1881569 fix.go:229] Guest: 2025-10-13 15:43:56.574388905 +0000 UTC Remote: 2025-10-13 15:43:56.50300288 +0000 UTC m=+19.831043931 (delta=71.386025ms)
	I1013 15:43:56.625020 1881569 fix.go:200] guest clock delta is within tolerance: 71.386025ms
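The two log lines above show minikube reading the guest clock over SSH (`date +%s.%N`), diffing it against the host-side reference, and accepting the 71ms delta as within tolerance. A minimal shell sketch of that delta computation, using the timestamps from this log; the `tolerance` value here is an illustrative assumption, not minikube's actual threshold:

```shell
# Hypothetical re-creation of the guest-clock tolerance check logged above.
guest=1760370236.574388905   # read from the guest via `date +%s.%N`
remote=1760370236.503002880  # host-side reference timestamp
tolerance=1                  # seconds; assumed for illustration

# Absolute difference of the two epoch timestamps, six decimal places.
delta=$(awk -v g="$guest" -v r="$remote" \
  'BEGIN { d = g - r; if (d < 0) d = -d; printf "%.6f", d }')
within=$(awk -v d="$delta" -v t="$tolerance" \
  'BEGIN { print ((d <= t) ? "yes" : "no") }')
echo "delta=${delta}s within_tolerance=$within"
```

With these inputs the delta comes out around 0.0714s, matching the `delta=71.386025ms` the log reports.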
	I1013 15:43:56.625030 1881569 start.go:83] releasing machines lock for "newest-cni-400509", held for 16.992897063s
	I1013 15:43:56.625061 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.625392 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:56.628808 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629195 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.629225 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629541 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630278 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630480 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630581 1881569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:56.630650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.630706 1881569 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:56.630755 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.635920 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636466 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636492 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.636511 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636805 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.637052 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.637161 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.637177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.637345 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.637508 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.637592 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.638223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.638488 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.638658 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:53.506025 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:53.552445 1881287 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561765 1881287 node_ready.go:49] node "default-k8s-diff-port-426789" is "Ready"
	I1013 15:43:53.561797 1881287 node_ready.go:38] duration metric: took 9.308209ms for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561815 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:53.561875 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:53.620414 1881287 api_server.go:72] duration metric: took 509.358173ms to wait for apiserver process to appear ...
	I1013 15:43:53.620447 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:53.620471 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:53.648031 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:53.650864 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:53.650897 1881287 api_server.go:131] duration metric: took 30.442085ms to wait for apiserver health ...
	I1013 15:43:53.650909 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:53.673424 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:53.673472 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.673485 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.673496 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.673507 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.673518 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.673527 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.673540 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.673549 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.673559 1881287 system_pods.go:74] duration metric: took 22.641644ms to wait for pod list to return data ...
	I1013 15:43:53.673573 1881287 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:43:53.685624 1881287 default_sa.go:45] found service account: "default"
	I1013 15:43:53.685669 1881287 default_sa.go:55] duration metric: took 12.081401ms for default service account to be created ...
	I1013 15:43:53.685695 1881287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 15:43:53.703485 1881287 system_pods.go:86] 8 kube-system pods found
	I1013 15:43:53.703536 1881287 system_pods.go:89] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.703551 1881287 system_pods.go:89] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.703563 1881287 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.703577 1881287 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.703585 1881287 system_pods.go:89] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.703592 1881287 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.703602 1881287 system_pods.go:89] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.703612 1881287 system_pods.go:89] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.703625 1881287 system_pods.go:126] duration metric: took 17.919545ms to wait for k8s-apps to be running ...
	I1013 15:43:53.703639 1881287 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 15:43:53.703708 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 15:43:53.836388 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.847671 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.859317 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:43:53.859351 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:43:53.863118 1881287 system_svc.go:56] duration metric: took 159.468238ms WaitForService to wait for kubelet
	I1013 15:43:53.863156 1881287 kubeadm.go:586] duration metric: took 752.10936ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:43:53.863183 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:53.868102 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:43:53.868135 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:43:53.876846 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:53.876881 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:53.876895 1881287 node_conditions.go:105] duration metric: took 13.705749ms to run NodePressure ...
	I1013 15:43:53.876911 1881287 start.go:241] waiting for startup goroutines ...
	I1013 15:43:53.975801 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:43:53.975837 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:43:54.014372 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:43:54.014413 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:43:54.097966 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.098001 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:43:54.102029 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:43:54.102070 1881287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:43:54.231798 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:43:54.231824 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:43:54.279938 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.422682 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:43:54.422738 1881287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:43:54.559022 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:43:54.559045 1881287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:43:54.673642 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:43:54.673671 1881287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:43:54.816125 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:43:54.816167 1881287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:43:54.994488 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:54.994521 1881287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:43:55.030337 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.193903867s)
	I1013 15:43:55.030400 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030415 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.030809 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.030875 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.030890 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.030903 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030915 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.031248 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.031256 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.031269 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060389 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.060423 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.060934 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.060958 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060959 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.140795 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:56.965227 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.117511004s)
	I1013 15:43:56.965299 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.965682 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.965698 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:56.965701 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.965725 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965735 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.966055 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.966089 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.982812 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.702823647s)
	I1013 15:43:56.982887 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.982902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983290 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983313 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983346 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.983354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983623 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983642 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983654 1881287 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:57.358086 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.217241399s)
	I1013 15:43:57.358160 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358174 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358579 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358599 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.358609 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358631 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358917 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:57.358932 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358960 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.363260 1881287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-426789 addons enable metrics-server
	
	I1013 15:43:57.365802 1881287 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1013 15:43:57.367317 1881287 addons.go:514] duration metric: took 4.256188456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1013 15:43:57.367371 1881287 start.go:246] waiting for cluster config update ...
	I1013 15:43:57.367388 1881287 start.go:255] writing updated cluster config ...
	I1013 15:43:57.367791 1881287 ssh_runner.go:195] Run: rm -f paused
	I1013 15:43:57.378391 1881287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:43:57.391148 1881287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:43:56.747519 1881569 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:56.754883 1881569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:56.762412 1881569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:56.762502 1881569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:56.786981 1881569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:56.787012 1881569 start.go:495] detecting cgroup driver to use...
	I1013 15:43:56.787098 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:56.822198 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:56.844111 1881569 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:56.844200 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:56.869650 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:56.890055 1881569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:57.069567 1881569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:57.320533 1881569 docker.go:234] disabling docker service ...
	I1013 15:43:57.320624 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:57.340325 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:57.358343 1881569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:57.573206 1881569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:57.752872 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:57.778609 1881569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:57.809437 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:57.825120 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:57.841470 1881569 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:57.841551 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:57.858777 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.874650 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:57.889338 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.905170 1881569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:57.921541 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:57.937087 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:57.951733 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
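The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox (pause) image and forcing the cgroupfs driver by setting `SystemdCgroup = false`. A sketch of the same two substitutions, run against a scratch copy so nothing on the host is touched (the sample TOML fragment is invented for illustration):

```shell
# Apply the sandbox_image and SystemdCgroup edits from the log to a
# throwaway config.toml fragment instead of the real file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Same substitutions the log runs via ssh_runner (GNU sed, -r for ERE).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

The capture group `( *)` preserves each line's indentation, which is why the replacements keep the TOML nesting intact.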
	I1013 15:43:57.967796 1881569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:57.981546 1881569 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:57.981609 1881569 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:58.008790 1881569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:58.024908 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:58.218957 1881569 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:58.264961 1881569 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:58.265076 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:58.271878 1881569 retry.go:31] will retry after 1.359480351s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
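Above, the first `stat /run/containerd/containerd.sock` fails because containerd has just been restarted, so minikube schedules a retry (`retry.go:31`) within its 60s budget and succeeds on the next attempt. A minimal shell sketch of that bounded wait, using a plain file as a stand-in for the socket; `wait_for_path` is a hypothetical helper, and minikube's real retry logic (with jittered backoff) lives in Go:

```shell
# Poll for a path to appear, giving up after a timeout (seconds).
wait_for_path() {
  path=$1; timeout=${2:-60}; waited=0
  until stat "$path" >/dev/null 2>&1; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1; waited=$((waited + 1))
  done
  return 0
}

sock=$(mktemp -u)              # a path that does not exist yet
( sleep 2; touch "$sock" ) &   # simulate containerd creating its socket
wait_for_path "$sock" 10 && echo "socket appeared"
```

The fixed 1s sleep is a simplification; a production wait would typically back off and surface the last `stat` error on timeout, as the log does.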
	I1013 15:43:59.632478 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:59.640017 1881569 start.go:563] Will wait 60s for crictl version
	I1013 15:43:59.640109 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:43:59.646533 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:59.704210 1881569 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:59.704321 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.745848 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.781571 1881569 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:59.783056 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:59.787259 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.787813 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:59.787850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.788151 1881569 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:59.793319 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
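The `/etc/hosts` command above uses a grep-then-append pattern: drop any stale `host.minikube.internal` line, emit the current mapping, and copy the result back over the file. Sketched here against a scratch copy (bash is assumed for the `$'\t'` tab literal, as in the logged command):

```bash
# Rewrite a hosts file the way the logged command does, on a scratch copy.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n' > "$hosts"

# Strip the old entry, append the fresh one, then replace the file.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and moving it into place (the log uses `/tmp/h.$$` plus `sudo cp`) avoids truncating `/etc/hosts` while `grep` is still reading it.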
	I1013 15:43:59.813808 1881569 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 15:43:59.815535 1881569 kubeadm.go:883] updating cluster {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:59.815759 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:59.815862 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.858933 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.858960 1881569 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:59.859025 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.900328 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.900362 1881569 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:59.900381 1881569 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.34.1 containerd true true} ...
	I1013 15:43:59.900516 1881569 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-400509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:59.900613 1881569 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:59.950762 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:59.950793 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:59.950823 1881569 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 15:43:59.950864 1881569 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400509 NodeName:newest-cni-400509 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:59.951043 1881569 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-400509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:59.951135 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:59.967876 1881569 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:59.967956 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:59.982916 1881569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1013 15:44:00.010237 1881569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:44:00.040144 1881569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1013 15:44:00.066386 1881569 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1013 15:44:00.071339 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:44:00.090025 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:00.252566 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:00.303616 1881569 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509 for IP: 192.168.39.58
	I1013 15:44:00.303643 1881569 certs.go:195] generating shared ca certs ...
	I1013 15:44:00.303666 1881569 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:00.303875 1881569 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:44:00.303956 1881569 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:44:00.303979 1881569 certs.go:257] generating profile certs ...
	I1013 15:44:00.304150 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/client.key
	I1013 15:44:00.304227 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key.832cd03a
	I1013 15:44:00.304286 1881569 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key
	I1013 15:44:00.304458 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:44:00.304508 1881569 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:44:00.304522 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:44:00.304562 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:44:00.304594 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:44:00.304628 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:44:00.304681 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:44:00.305582 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:44:00.349695 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:44:00.394423 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:44:00.453420 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:44:00.500378 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 15:44:00.553138 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 15:44:00.590334 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:44:00.630023 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:44:00.668829 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:44:00.712223 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:44:00.752915 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:44:00.789877 1881569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:44:00.813337 1881569 ssh_runner.go:195] Run: openssl version
	I1013 15:44:00.821230 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:44:00.837532 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843842 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843915 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.852403 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:44:00.868962 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:44:00.887762 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895478 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895571 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.904610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:44:00.921509 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:44:00.940954 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947541 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947630 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.956030 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:44:00.974527 1881569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:44:00.981332 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:44:00.992960 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:44:01.004003 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:44:01.012671 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:44:01.020681 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:44:01.028927 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 15:44:01.037647 1881569 kubeadm.go:400] StartCluster: {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:44:01.037778 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:44:01.037843 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.097948 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.097981 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.097988 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.097993 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.097997 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.098002 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.098006 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.098010 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.098014 1881569 cri.go:89] found id: ""
	I1013 15:44:01.098075 1881569 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:44:01.122443 1881569 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:44:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:44:01.122587 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:44:01.144393 1881569 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:44:01.144424 1881569 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:44:01.144489 1881569 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:44:01.159059 1881569 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:44:01.160097 1881569 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-400509" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:01.160849 1881569 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-400509" cluster setting kubeconfig missing "newest-cni-400509" context setting]
	I1013 15:44:01.162117 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:01.164324 1881569 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:44:01.182868 1881569 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.58
	I1013 15:44:01.182912 1881569 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:44:01.182929 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:44:01.183008 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.236181 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.236210 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.236217 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.236223 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.236228 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.236233 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.236237 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.236241 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.236245 1881569 cri.go:89] found id: ""
	I1013 15:44:01.236272 1881569 cri.go:252] Stopping containers: [1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c]
	I1013 15:44:01.236375 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:44:01.241802 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c
	I1013 15:44:01.290389 1881569 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:44:01.314882 1881569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:44:01.329255 1881569 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:44:01.329305 1881569 kubeadm.go:157] found existing configuration files:
	
	I1013 15:44:01.329373 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 15:44:01.341956 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:44:01.342028 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:44:01.355841 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 15:44:01.368810 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:44:01.368903 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:44:01.382268 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.396472 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:44:01.396552 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.412562 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 15:44:01.426123 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:44:01.426188 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:44:01.442585 1881569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:44:01.460493 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:01.611108 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	W1013 15:43:59.400593 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	W1013 15:44:01.404013 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	I1013 15:44:02.909951 1881287 pod_ready.go:94] pod "coredns-66bc5c9577-7mm74" is "Ready"
	I1013 15:44:02.909990 1881287 pod_ready.go:86] duration metric: took 5.518800662s for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.913489 1881287 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.919647 1881287 pod_ready.go:94] pod "etcd-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:02.919678 1881287 pod_ready.go:86] duration metric: took 6.161871ms for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.928092 1881287 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.438075 1881287 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.438113 1881287 pod_ready.go:86] duration metric: took 1.509988538s for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.442872 1881287 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.451602 1881287 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.451645 1881287 pod_ready.go:86] duration metric: took 8.73711ms for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.497031 1881287 pod_ready.go:83] waiting for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.897578 1881287 pod_ready.go:94] pod "kube-proxy-2vt8l" is "Ready"
	I1013 15:44:04.897618 1881287 pod_ready.go:86] duration metric: took 400.546183ms for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.096440 1881287 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496577 1881287 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:05.496616 1881287 pod_ready.go:86] duration metric: took 400.135912ms for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496664 1881287 pod_ready.go:40] duration metric: took 8.118190331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:44:05.552871 1881287 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:05.554860 1881287 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-426789" cluster and "default" namespace by default
	I1013 15:44:02.860183 1881569 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.249017124s)
	I1013 15:44:02.860277 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.168409 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.257048 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.348980 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:03.349102 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:03.849619 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.350010 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.849274 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.888091 1881569 api_server.go:72] duration metric: took 1.539128472s to wait for apiserver process to appear ...
	I1013 15:44:04.888128 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:04.888157 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:04.888817 1881569 api_server.go:269] stopped: https://192.168.39.58:8443/healthz: Get "https://192.168.39.58:8443/healthz": dial tcp 192.168.39.58:8443: connect: connection refused
	I1013 15:44:05.388397 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:07.970700 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:07.970755 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:07.970773 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.014873 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:08.014906 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:08.388242 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.394684 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.394733 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:08.888394 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.898015 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.898049 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.388508 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.394367 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.394400 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.888304 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.895427 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.895462 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:10.389244 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:10.396050 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:10.404568 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:10.404611 1881569 api_server.go:131] duration metric: took 5.516473663s to wait for apiserver health ...
	I1013 15:44:10.404626 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:44:10.404634 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:44:10.406752 1881569 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:44:10.408371 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:44:10.423786 1881569 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:44:10.455726 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:10.462697 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:10.462753 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462769 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462780 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:10.462790 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:10.462802 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:10.462808 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:10.462815 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:10.462842 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:10.462847 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:10.462855 1881569 system_pods.go:74] duration metric: took 7.102704ms to wait for pod list to return data ...
	I1013 15:44:10.462869 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:10.467505 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:10.467542 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:10.467556 1881569 node_conditions.go:105] duration metric: took 4.682317ms to run NodePressure ...
	I1013 15:44:10.467610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:10.762255 1881569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:44:10.780389 1881569 ops.go:34] apiserver oom_adj: -16
	I1013 15:44:10.780421 1881569 kubeadm.go:601] duration metric: took 9.635988482s to restartPrimaryControlPlane
	I1013 15:44:10.780437 1881569 kubeadm.go:402] duration metric: took 9.742806388s to StartCluster
	I1013 15:44:10.780475 1881569 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.780589 1881569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:10.782504 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.782808 1881569 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:44:10.782888 1881569 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:44:10.783000 1881569 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400509"
	I1013 15:44:10.783025 1881569 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400509"
	W1013 15:44:10.783033 1881569 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:44:10.783032 1881569 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400509"
	I1013 15:44:10.783057 1881569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400509"
	I1013 15:44:10.783065 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783066 1881569 addons.go:69] Setting metrics-server=true in profile "newest-cni-400509"
	I1013 15:44:10.783090 1881569 addons.go:69] Setting dashboard=true in profile "newest-cni-400509"
	I1013 15:44:10.783117 1881569 addons.go:238] Setting addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:10.783123 1881569 addons.go:238] Setting addon dashboard=true in "newest-cni-400509"
	W1013 15:44:10.783132 1881569 addons.go:247] addon dashboard should already be in state true
	I1013 15:44:10.783147 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:44:10.783174 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	W1013 15:44:10.783132 1881569 addons.go:247] addon metrics-server should already be in state true
	I1013 15:44:10.783246 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783508 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783559 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783583 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783505 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783614 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783640 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783648 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783670 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.784368 1881569 out.go:179] * Verifying Kubernetes components...
	I1013 15:44:10.785756 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I1013 15:44:10.801032 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801109 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 15:44:10.801246 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801506 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801929 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.801955 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802056 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802082 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802110 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I1013 15:44:10.802430 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802455 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802480 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802460 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802674 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.803138 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803158 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803208 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803230 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803443 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.803454 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.803467 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.803920 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.804033 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.804083 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.804124 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.812531 1881569 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400509"
	W1013 15:44:10.812560 1881569 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:44:10.812594 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.812997 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.813066 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.820690 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1013 15:44:10.821988 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.822645 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.822687 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.823210 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.823487 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.827289 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.829099 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1013 15:44:10.829669 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.829812 1881569 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:44:10.830088 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I1013 15:44:10.830259 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.830280 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.830669 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.830818 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.830868 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.831364 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.831385 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.832151 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I1013 15:44:10.832239 1881569 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:44:10.832197 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.833231 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.833272 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.833297 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.833471 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:44:10.833488 1881569 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:44:10.833508 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.833970 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.834643 1881569 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:44:10.834786 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.834839 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.835807 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.837731 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838271 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.838321 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838595 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.838792 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.838994 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.839128 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.839520 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:44:10.839547 1881569 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:44:10.839574 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.840359 1881569 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:44:10.841784 1881569 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:10.841804 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:44:10.841825 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.844531 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845501 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.845570 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845952 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.846206 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.846484 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.846861 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.847137 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.847628 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.847850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.848261 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.848469 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.848657 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.848992 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.853772 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1013 15:44:10.854204 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.854681 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.854698 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.855059 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.855327 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.857412 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.857679 1881569 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:10.857694 1881569 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:44:10.857728 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.861587 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.861994 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.862021 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.862318 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.862498 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.862640 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.862796 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:11.065604 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:11.089626 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:11.089733 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:11.110889 1881569 api_server.go:72] duration metric: took 328.043615ms to wait for apiserver process to appear ...
	I1013 15:44:11.110921 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:11.110945 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:11.116791 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:11.117887 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:11.117919 1881569 api_server.go:131] duration metric: took 6.988921ms to wait for apiserver health ...
	I1013 15:44:11.117931 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:11.127122 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:11.127169 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127186 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127195 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:11.127208 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:11.127214 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:11.127218 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:11.127223 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:11.127228 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:11.127231 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:11.127241 1881569 system_pods.go:74] duration metric: took 9.299922ms to wait for pod list to return data ...
	I1013 15:44:11.127267 1881569 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:44:11.131642 1881569 default_sa.go:45] found service account: "default"
	I1013 15:44:11.131672 1881569 default_sa.go:55] duration metric: took 4.396286ms for default service account to be created ...
	I1013 15:44:11.131689 1881569 kubeadm.go:586] duration metric: took 348.849317ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:44:11.131723 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:11.135748 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:11.135781 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:11.135795 1881569 node_conditions.go:105] duration metric: took 4.065136ms to run NodePressure ...
	I1013 15:44:11.135809 1881569 start.go:241] waiting for startup goroutines ...
	I1013 15:44:11.297679 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:44:11.297704 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:44:11.302366 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:44:11.302395 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:44:11.328126 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:11.336312 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:11.390077 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:44:11.390113 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:44:11.401349 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:44:11.401380 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:44:11.487081 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:44:11.487113 1881569 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:44:11.514896 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.514927 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:44:11.548697 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:44:11.548735 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:44:11.576084 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.638992 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:44:11.639025 1881569 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:44:11.739144 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:44:11.739177 1881569 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:44:11.851415 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:44:11.851451 1881569 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:44:11.964190 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:44:11.964227 1881569 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:44:12.151581 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:12.151616 1881569 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:44:12.348324 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:14.548429 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.212077572s)
	I1013 15:44:14.548509 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548523 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548612 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.22045241s)
	I1013 15:44:14.548643 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548889 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.548910 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.548922 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548931 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549013 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549064 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549083 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549102 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.549113 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549247 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549260 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549515 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549546 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549552 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.590958 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.590989 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.591387 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.591401 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.591419 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690046 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.113908538s)
	I1013 15:44:14.690105 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690120 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690573 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690605 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690622 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690634 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690904 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690936 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690957 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690981 1881569 addons.go:479] Verifying addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:15.069622 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.721227304s)
	I1013 15:44:15.069689 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.069705 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070241 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:15.070270 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070282 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.070295 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.070301 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070572 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070587 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.074390 1881569 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-400509 addons enable metrics-server
	
	I1013 15:44:15.076426 1881569 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1013 15:44:15.077979 1881569 addons.go:514] duration metric: took 4.295084518s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1013 15:44:15.078038 1881569 start.go:246] waiting for cluster config update ...
	I1013 15:44:15.078071 1881569 start.go:255] writing updated cluster config ...
	I1013 15:44:15.078443 1881569 ssh_runner.go:195] Run: rm -f paused
	I1013 15:44:15.144611 1881569 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:15.146748 1881569 out.go:179] * Done! kubectl is now configured to use "newest-cni-400509" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	42257f6a74732       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   8                   ef84cd84287c6       dashboard-metrics-scraper-6ffb444bf9-fbbs2
	68b3fdbaad74b       6e38f40d628db       17 minutes ago       Running             storage-provisioner         2                   8fcf5a5038548       storage-provisioner
	fff2931577732       56cc512116c8f       18 minutes ago       Running             busybox                     1                   79c5a3c348fd5       busybox
	9be646d88f3f4       52546a367cc9e       18 minutes ago       Running             coredns                     1                   59f8d1b6eff12       coredns-66bc5c9577-vfqml
	c8d68c0b5b004       6e38f40d628db       18 minutes ago       Exited              storage-provisioner         1                   8fcf5a5038548       storage-provisioner
	8a31e63284253       fc25172553d79       18 minutes ago       Running             kube-proxy                  1                   fd61ae777eb69       kube-proxy-v8ndx
	5e5dd356ff2ec       c3994bc696102       18 minutes ago       Running             kube-apiserver              1                   dee68c3ae6b6a       kube-apiserver-no-preload-673307
	c10ad89ae3abb       5f1f5298c888d       18 minutes ago       Running             etcd                        1                   1c280b78a8f1b       etcd-no-preload-673307
	84ce421cd0a89       7dd6aaa1717ab       18 minutes ago       Running             kube-scheduler              1                   1c083b14c19d7       kube-scheduler-no-preload-673307
	2709cff04f5c8       c80c8dbafe7dd       18 minutes ago       Running             kube-controller-manager     1                   f202c9072274e       kube-controller-manager-no-preload-673307
	9484313d54631       56cc512116c8f       20 minutes ago       Exited              busybox                     0                   d6fcae4e1d4d5       busybox
	dca47b48c82d3       52546a367cc9e       20 minutes ago       Exited              coredns                     0                   7e0f99084df2e       coredns-66bc5c9577-vfqml
	22670bd9ab094       fc25172553d79       20 minutes ago       Exited              kube-proxy                  0                   a961c0d8c2594       kube-proxy-v8ndx
	c049868803b14       5f1f5298c888d       21 minutes ago       Exited              etcd                        0                   323ab2d53b64a       etcd-no-preload-673307
	97b7ebc7f552a       c3994bc696102       21 minutes ago       Exited              kube-apiserver              0                   092bd0f706ede       kube-apiserver-no-preload-673307
	668e85e990be5       7dd6aaa1717ab       21 minutes ago       Exited              kube-scheduler              0                   c1ff9a19d8382       kube-scheduler-no-preload-673307
	b87b6ea0d2c9d       c80c8dbafe7dd       21 minutes ago       Exited              kube-controller-manager     0                   4a5ffd2a57b04       kube-controller-manager-no-preload-673307
	
	
	==> containerd <==
	Oct 13 15:42:55 no-preload-673307 containerd[721]: time="2025-10-13T15:42:55.471218304Z" level=info msg="StartContainer for \"28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d\""
	Oct 13 15:42:55 no-preload-673307 containerd[721]: time="2025-10-13T15:42:55.558919388Z" level=info msg="StartContainer for \"28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d\" returns successfully"
	Oct 13 15:42:55 no-preload-673307 containerd[721]: time="2025-10-13T15:42:55.608371017Z" level=info msg="shim disconnected" id=28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d namespace=k8s.io
	Oct 13 15:42:55 no-preload-673307 containerd[721]: time="2025-10-13T15:42:55.608543450Z" level=warning msg="cleaning up after shim disconnected" id=28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d namespace=k8s.io
	Oct 13 15:42:55 no-preload-673307 containerd[721]: time="2025-10-13T15:42:55.608563827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:42:56 no-preload-673307 containerd[721]: time="2025-10-13T15:42:56.296237587Z" level=info msg="RemoveContainer for \"e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64\""
	Oct 13 15:42:56 no-preload-673307 containerd[721]: time="2025-10-13T15:42:56.305081727Z" level=info msg="RemoveContainer for \"e19bdab9211abb8e318b6dca1c7f763b3600f39e201de12b26fa6ab488208c64\" returns successfully"
	Oct 13 15:47:25 no-preload-673307 containerd[721]: time="2025-10-13T15:47:25.432312871Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:47:25 no-preload-673307 containerd[721]: time="2025-10-13T15:47:25.439354594Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:47:25 no-preload-673307 containerd[721]: time="2025-10-13T15:47:25.441683476Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:47:25 no-preload-673307 containerd[721]: time="2025-10-13T15:47:25.442651589Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 13 15:47:50 no-preload-673307 containerd[721]: time="2025-10-13T15:47:50.428646976Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:47:50 no-preload-673307 containerd[721]: time="2025-10-13T15:47:50.432014822Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:47:50 no-preload-673307 containerd[721]: time="2025-10-13T15:47:50.512687058Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:47:50 no-preload-673307 containerd[721]: time="2025-10-13T15:47:50.683302248Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:47:50 no-preload-673307 containerd[721]: time="2025-10-13T15:47:50.683567538Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.432872916Z" level=info msg="CreateContainer within sandbox \"ef84cd84287c6ac9da0e101f12c902d29bd6a2a1bb40086aef634b428a605eb8\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.459925026Z" level=info msg="CreateContainer within sandbox \"ef84cd84287c6ac9da0e101f12c902d29bd6a2a1bb40086aef634b428a605eb8\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5\""
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.460797242Z" level=info msg="StartContainer for \"42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5\""
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.546938173Z" level=info msg="StartContainer for \"42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5\" returns successfully"
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.605917040Z" level=info msg="shim disconnected" id=42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5 namespace=k8s.io
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.606067253Z" level=warning msg="cleaning up after shim disconnected" id=42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5 namespace=k8s.io
	Oct 13 15:48:03 no-preload-673307 containerd[721]: time="2025-10-13T15:48:03.606094449Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:48:04 no-preload-673307 containerd[721]: time="2025-10-13T15:48:04.346076964Z" level=info msg="RemoveContainer for \"28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d\""
	Oct 13 15:48:04 no-preload-673307 containerd[721]: time="2025-10-13T15:48:04.356086438Z" level=info msg="RemoveContainer for \"28ad05991e11dc2b20890e134e8ce8d5861f815e3e1559dacddd7098c357b32d\" returns successfully"
	
	
	==> coredns [9be646d88f3f4c3d43677b13d759061982b278cec62582a93069acfab88a81cf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50220 - 57105 "HINFO IN 2337932929109627341.7539543411957341480. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.444279123s
	
	
	==> coredns [dca47b48c82d372e8c111ef9f1b2fd5b34da6d82251035a0e5be07fa64b08493] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               no-preload-673307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-673307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=no-preload-673307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_28_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:28:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-673307
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:49:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:45:39 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:45:39 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:45:39 +0000   Mon, 13 Oct 2025 15:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:45:39 +0000   Mon, 13 Oct 2025 15:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.180
	  Hostname:    no-preload-673307
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1cfd5774df7841e686f57e78cc4438e8
	  System UUID:                1cfd5774-df78-41e6-86f5-7e78cc4438e8
	  Boot ID:                    d3f39062-cfb5-49bd-a190-ed26112d5333
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-vfqml                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     20m
	  kube-system                 etcd-no-preload-673307                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-673307              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-673307     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-v8ndx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-673307              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-fx4gj               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fbbs2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dqs5m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     20m                kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   NodeReady                20m                kubelet          Node no-preload-673307 status is now: NodeReady
	  Normal   RegisteredNode           20m                node-controller  Node no-preload-673307 event: Registered Node no-preload-673307 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-673307 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-673307 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node no-preload-673307 has been rebooted, boot id: d3f39062-cfb5-49bd-a190-ed26112d5333
	  Normal   RegisteredNode           18m                node-controller  Node no-preload-673307 event: Registered Node no-preload-673307 in Controller
	
	
	==> dmesg <==
	[Oct13 15:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009150] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.980005] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085500] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.113922] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.618932] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.968954] kauditd_printk_skb: 182 callbacks suppressed
	[  +1.461254] kauditd_printk_skb: 176 callbacks suppressed
	[  +2.929381] kauditd_printk_skb: 41 callbacks suppressed
	[Oct13 15:32] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.981032] kauditd_printk_skb: 7 callbacks suppressed
	[ +23.015573] kauditd_printk_skb: 5 callbacks suppressed
	[Oct13 15:33] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:34] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:37] kauditd_printk_skb: 18 callbacks suppressed
	[Oct13 15:42] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:48] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [c049868803b144f173af37d69998803a723d7e4f596a759002565b5c8858fe03] <==
	{"level":"info","ts":"2025-10-13T15:29:01.834748Z","caller":"traceutil/trace.go:172","msg":"trace[79171425] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"559.004876ms","start":"2025-10-13T15:29:01.275723Z","end":"2025-10-13T15:29:01.834728Z","steps":["trace[79171425] 'process raft request'  (duration: 558.529057ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:01.835761Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.276333Z","time spent":"558.357537ms","remote":"127.0.0.1:58982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":830,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:272 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:782 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-10-13T15:29:01.834504Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"454.642097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-10-13T15:29:01.836550Z","caller":"traceutil/trace.go:172","msg":"trace[1279998901] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:380; }","duration":"456.775528ms","start":"2025-10-13T15:29:01.379760Z","end":"2025-10-13T15:29:01.836535Z","steps":["trace[1279998901] 'agreement among raft nodes before linearized reading'  (duration: 454.181313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:01.836888Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.379741Z","time spent":"457.116792ms","remote":"127.0.0.1:58990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":374,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T15:29:01.838465Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:01.275700Z","time spent":"559.33455ms","remote":"127.0.0.1:59150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4954,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" mod_revision:307 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" value_size:4887 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-no-preload-673307\" > >"}
	{"level":"info","ts":"2025-10-13T15:29:01.934276Z","caller":"traceutil/trace.go:172","msg":"trace[1419914040] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"226.136565ms","start":"2025-10-13T15:29:01.708116Z","end":"2025-10-13T15:29:01.934252Z","steps":["trace[1419914040] 'process raft request'  (duration: 207.051274ms)","trace[1419914040] 'compare'  (duration: 16.946979ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.586185Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5960683286396370121,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-13T15:29:03.772615Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"505.564194ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:29:03.772688Z","caller":"traceutil/trace.go:172","msg":"trace[510750428] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:410; }","duration":"505.711186ms","start":"2025-10-13T15:29:03.266966Z","end":"2025-10-13T15:29:03.772677Z","steps":["trace[510750428] 'range keys from in-memory index tree'  (duration: 505.497011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.773237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"753.49044ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960683286396370123 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" value_size:653 lease:5960683286396369334 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-13T15:29:03.773282Z","caller":"traceutil/trace.go:172","msg":"trace[1776426828] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"837.982533ms","start":"2025-10-13T15:29:02.935292Z","end":"2025-10-13T15:29:03.773274Z","steps":["trace[1776426828] 'process raft request'  (duration: 84.256489ms)","trace[1776426828] 'compare'  (duration: 753.356269ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.773308Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:02.935272Z","time spent":"838.026607ms","remote":"127.0.0.1:58922","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":741,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-ccpqs.186e16972eedd8f0\" value_size:653 lease:5960683286396369334 >> failure:<>"}
	{"level":"info","ts":"2025-10-13T15:29:03.774314Z","caller":"traceutil/trace.go:172","msg":"trace[677175873] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"831.26664ms","start":"2025-10-13T15:29:02.943037Z","end":"2025-10-13T15:29:03.774303Z","steps":["trace[677175873] 'process raft request'  (duration: 831.202332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.774440Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:02.942955Z","time spent":"831.388369ms","remote":"127.0.0.1:53978","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:350 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2025-10-13T15:29:03.774570Z","caller":"traceutil/trace.go:172","msg":"trace[455507024] linearizableReadLoop","detail":"{readStateIndex:427; appliedIndex:428; }","duration":"688.690046ms","start":"2025-10-13T15:29:03.085873Z","end":"2025-10-13T15:29:03.774563Z","steps":["trace[455507024] 'read index received'  (duration: 688.687953ms)","trace[455507024] 'applied index is now lower than readState.Index'  (duration: 1.716µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.774702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"688.826508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:29:03.774768Z","caller":"traceutil/trace.go:172","msg":"trace[279751344] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:412; }","duration":"688.844964ms","start":"2025-10-13T15:29:03.085868Z","end":"2025-10-13T15:29:03.774713Z","steps":["trace[279751344] 'agreement among raft nodes before linearized reading'  (duration: 688.809579ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.774807Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.085843Z","time spent":"688.955768ms","remote":"127.0.0.1:59150","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-13T15:29:03.887111Z","caller":"traceutil/trace.go:172","msg":"trace[940418926] linearizableReadLoop","detail":"{readStateIndex:428; appliedIndex:428; }","duration":"112.470041ms","start":"2025-10-13T15:29:03.774584Z","end":"2025-10-13T15:29:03.887054Z","steps":["trace[940418926] 'read index received'  (duration: 112.461017ms)","trace[940418926] 'applied index is now lower than readState.Index'  (duration: 6.962µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:29:03.898867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"537.149044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.180\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-10-13T15:29:03.899309Z","caller":"traceutil/trace.go:172","msg":"trace[2023319934] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"115.355704ms","start":"2025-10-13T15:29:03.783938Z","end":"2025-10-13T15:29:03.899294Z","steps":["trace[2023319934] 'process raft request'  (duration: 115.219782ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:29:03.899620Z","caller":"traceutil/trace.go:172","msg":"trace[718732609] range","detail":"{range_begin:/registry/masterleases/192.168.61.180; range_end:; response_count:1; response_revision:412; }","duration":"537.917585ms","start":"2025-10-13T15:29:03.361684Z","end":"2025-10-13T15:29:03.899602Z","steps":["trace[718732609] 'agreement among raft nodes before linearized reading'  (duration: 525.513518ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:29:03.900561Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.361666Z","time spent":"538.870774ms","remote":"127.0.0.1:58820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.61.180\" limit:1 "}
	{"level":"warn","ts":"2025-10-13T15:29:03.899353Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:29:03.308378Z","time spent":"590.970465ms","remote":"127.0.0.1:54082","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [c10ad89ae3abbf41e4927c24532eb50ca09ac34ecd038f6df274bdadd88c8715] <==
	{"level":"warn","ts":"2025-10-13T15:31:36.217893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.922512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.218559Z","caller":"traceutil/trace.go:172","msg":"trace[1386832768] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:522; }","duration":"145.788837ms","start":"2025-10-13T15:31:36.072747Z","end":"2025-10-13T15:31:36.218536Z","steps":["trace[1386832768] 'agreement among raft nodes before linearized reading'  (duration: 141.869992ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.222067Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.24118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.222152Z","caller":"traceutil/trace.go:172","msg":"trace[1480413118] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:522; }","duration":"149.43538ms","start":"2025-10-13T15:31:36.072700Z","end":"2025-10-13T15:31:36.222135Z","steps":["trace[1480413118] 'agreement among raft nodes before linearized reading'  (duration: 149.097701ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.222981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.302884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.223049Z","caller":"traceutil/trace.go:172","msg":"trace[886982028] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:522; }","duration":"150.378922ms","start":"2025-10-13T15:31:36.072658Z","end":"2025-10-13T15:31:36.223037Z","steps":["trace[886982028] 'agreement among raft nodes before linearized reading'  (duration: 150.188657ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.232125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.464911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.232249Z","caller":"traceutil/trace.go:172","msg":"trace[172312641] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:522; }","duration":"159.60561ms","start":"2025-10-13T15:31:36.072628Z","end":"2025-10-13T15:31:36.232233Z","steps":["trace[172312641] 'agreement among raft nodes before linearized reading'  (duration: 159.396803ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:31:36.233106Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.487344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:31:36.233195Z","caller":"traceutil/trace.go:172","msg":"trace[639761388] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:522; }","duration":"160.587763ms","start":"2025-10-13T15:31:36.072595Z","end":"2025-10-13T15:31:36.233182Z","steps":["trace[639761388] 'agreement among raft nodes before linearized reading'  (duration: 160.443444ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:40:59.601618Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.512678ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960683286437292057 > lease_revoke:<id:52b899de32b3e7b0>","response":"size:28"}
	{"level":"info","ts":"2025-10-13T15:40:59.602079Z","caller":"traceutil/trace.go:172","msg":"trace[2058136525] transaction","detail":"{read_only:false; response_revision:1323; number_of_response:1; }","duration":"158.979753ms","start":"2025-10-13T15:40:59.443072Z","end":"2025-10-13T15:40:59.602052Z","steps":["trace[2058136525] 'process raft request'  (duration: 158.829539ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:41:05.691513Z","caller":"traceutil/trace.go:172","msg":"trace[1617815048] transaction","detail":"{read_only:false; response_revision:1328; number_of_response:1; }","duration":"111.473842ms","start":"2025-10-13T15:41:05.579934Z","end":"2025-10-13T15:41:05.691407Z","steps":["trace[1617815048] 'process raft request'  (duration: 111.328559ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:41:30.125594Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1091}
	{"level":"info","ts":"2025-10-13T15:41:30.154925Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1091,"took":"28.791155ms","hash":2836892624,"current-db-size-bytes":3284992,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1425408,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-13T15:41:30.154991Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2836892624,"revision":1091,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T15:42:48.583610Z","caller":"traceutil/trace.go:172","msg":"trace[628235629] linearizableReadLoop","detail":"{readStateIndex:1604; appliedIndex:1604; }","duration":"115.41719ms","start":"2025-10-13T15:42:48.468135Z","end":"2025-10-13T15:42:48.583552Z","steps":["trace[628235629] 'read index received'  (duration: 115.410324ms)","trace[628235629] 'applied index is now lower than readState.Index'  (duration: 5.715µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T15:42:48.583849Z","caller":"traceutil/trace.go:172","msg":"trace[409231299] transaction","detail":"{read_only:false; response_revision:1418; number_of_response:1; }","duration":"145.699884ms","start":"2025-10-13T15:42:48.438132Z","end":"2025-10-13T15:42:48.583831Z","steps":["trace[409231299] 'process raft request'  (duration: 145.498425ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:42:48.583997Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.80749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-10-13T15:42:48.584048Z","caller":"traceutil/trace.go:172","msg":"trace[1963192062] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1418; }","duration":"115.905077ms","start":"2025-10-13T15:42:48.468131Z","end":"2025-10-13T15:42:48.584036Z","steps":["trace[1963192062] 'agreement among raft nodes before linearized reading'  (duration: 115.702943ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:42:48.803837Z","caller":"traceutil/trace.go:172","msg":"trace[173950102] transaction","detail":"{read_only:false; response_revision:1419; number_of_response:1; }","duration":"211.847538ms","start":"2025-10-13T15:42:48.591971Z","end":"2025-10-13T15:42:48.803819Z","steps":["trace[173950102] 'process raft request'  (duration: 211.64801ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:43:09.658894Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.364622ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5960683286437292906 > lease_revoke:<id:52b899de32b3eafa>","response":"size:28"}
	{"level":"info","ts":"2025-10-13T15:46:30.133250Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2025-10-13T15:46:30.139663Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1348,"took":"4.904519ms","hash":2477311159,"current-db-size-bytes":3284992,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-10-13T15:46:30.139730Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2477311159,"revision":1348,"compact-revision":1091}
	
	
	==> kernel <==
	 15:49:53 up 18 min,  0 users,  load average: 0.40, 0.36, 0.21
	Linux no-preload-673307 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5e5dd356ff2ecd3b2a79371993d9db06b0e5f407812a48ae0510dcfaee7b770c] <==
	E1013 15:46:33.775763       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:46:33.775811       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1013 15:46:33.775871       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:46:33.777210       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:47:33.775978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:47:33.776045       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:47:33.776060       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:47:33.778327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:47:33.778535       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:47:33.778575       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:49:33.776481       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:49:33.776691       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:49:33.776714       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:49:33.779729       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:49:33.779872       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:49:33.780068       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [97b7ebc7f552a892033fd37731d8cf1db86ef835db80eaa5072d77e823d5ab0f] <==
	I1013 15:28:54.691729       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:28:54.785957       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 15:28:59.799591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:28:59.807615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:00.187888       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 15:29:00.346173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 15:29:24.416292       1 conn.go:339] Error on socket receive: read tcp 192.168.61.180:8443->192.168.61.1:55516: use of closed network connection
	I1013 15:29:25.311417       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:29:25.325268       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.325388       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 15:29:25.325441       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:29:25.525877       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.75.252"}
	W1013 15:29:25.536965       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.537140       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1013 15:29:25.557586       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:29:25.557648       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [2709cff04f5c8f3d8e031b98918282f280278e2f018fa8f081540af0ea415234] <==
	I1013 15:43:36.816734       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:44:06.662166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:44:06.827148       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:44:36.669648       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:44:36.838351       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:45:06.675220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:45:06.848220       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:45:36.680754       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:45:36.861760       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:46:06.687758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:46:06.870602       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:46:36.693682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:46:36.881690       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:06.699919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:06.891913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:36.707180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:36.900558       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:06.712348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:06.909219       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:36.720022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:36.926388       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:06.726097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:06.937133       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:36.731986       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:36.947897       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [b87b6ea0d2c9dabb66c3ff7cdf95b3b641c6ba1e5e14525c946773448e23f04e] <==
	I1013 15:28:59.297425       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 15:28:59.297768       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 15:28:59.298244       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 15:28:59.298343       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 15:28:59.297066       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 15:28:59.301372       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 15:28:59.301709       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 15:28:59.302565       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 15:28:59.302912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 15:28:59.307099       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 15:28:59.311160       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 15:28:59.322145       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:28:59.322270       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 15:28:59.333689       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 15:28:59.342624       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 15:28:59.343351       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-673307" podCIDRs=["10.244.0.0/24"]
	I1013 15:28:59.343444       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 15:28:59.344091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 15:28:59.346516       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 15:28:59.352566       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:28:59.352617       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 15:28:59.352628       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 15:28:59.353806       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 15:28:59.356163       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 15:28:59.358439       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [22670bd9ab09463fa3b05acf2e24db7346873b154520857894403e5e1ac9a3a4] <==
	I1013 15:29:03.121239       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:29:03.221763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:29:03.221805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.180"]
	E1013 15:29:03.222279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:29:03.275273       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:29:03.275429       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:29:03.275747       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:29:03.289415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:29:03.290209       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:29:03.290543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:29:03.298295       1 config.go:200] "Starting service config controller"
	I1013 15:29:03.298542       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:29:03.298793       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:29:03.298903       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:29:03.299121       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:29:03.299187       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:29:03.304907       1 config.go:309] "Starting node config controller"
	I1013 15:29:03.305047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:29:03.305066       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:29:03.399273       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 15:29:03.399292       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:29:03.399363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8a31e632842532c09058356081dc694b6fb32c7e6b806531b0c23108c2db8d89] <==
	I1013 15:31:34.822244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:31:34.923372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:31:34.923780       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.180"]
	E1013 15:31:34.924664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:31:35.145588       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:31:35.145667       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:31:35.145778       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:31:35.225337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:31:35.253256       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:31:35.253508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:35.273654       1 config.go:200] "Starting service config controller"
	I1013 15:31:35.275004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:31:35.284661       1 config.go:309] "Starting node config controller"
	I1013 15:31:35.284707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:31:35.279399       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:31:35.326102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:31:35.279503       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:31:35.326391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:31:35.384774       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:31:35.384825       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:31:35.427229       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:31:35.427288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [668e85e990be5774163a840d95ba68ca46711c333066747ce7afa9a54793856a] <==
	E1013 15:28:51.339212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:28:51.339289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:28:51.339350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 15:28:51.339443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:28:51.339503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:28:52.162695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 15:28:52.184695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:28:52.185257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:28:52.254582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 15:28:52.259823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 15:28:52.284962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 15:28:52.311912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:28:52.351124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 15:28:52.367698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:28:52.436750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:28:52.462783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:28:52.562081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 15:28:52.613496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:28:52.689211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 15:28:52.694028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:28:52.766331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:28:52.778954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:28:52.851270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:28:52.908402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1013 15:28:55.011073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [84ce421cd0a89e02fbf410505ac618d70b2aa42fb66f07c012f81c834b95733e] <==
	I1013 15:31:30.084311       1 serving.go:386] Generated self-signed cert in-memory
	W1013 15:31:32.649506       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:31:32.649839       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:31:32.649876       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:31:32.649883       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:31:32.741472       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 15:31:32.741517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:31:32.751734       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:31:32.751987       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:31:32.755514       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 15:31:32.755968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 15:31:32.853316       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 15:48:38 no-preload-673307 kubelet[1041]: E1013 15:48:38.427347    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:48:41 no-preload-673307 kubelet[1041]: E1013 15:48:41.430357    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:48:41 no-preload-673307 kubelet[1041]: E1013 15:48:41.430784    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:48:49 no-preload-673307 kubelet[1041]: I1013 15:48:49.426547    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:48:49 no-preload-673307 kubelet[1041]: E1013 15:48:49.426805    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:48:52 no-preload-673307 kubelet[1041]: E1013 15:48:52.427943    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:48:54 no-preload-673307 kubelet[1041]: E1013 15:48:54.427391    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:49:01 no-preload-673307 kubelet[1041]: I1013 15:49:01.431354    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:49:01 no-preload-673307 kubelet[1041]: E1013 15:49:01.431560    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:49:05 no-preload-673307 kubelet[1041]: E1013 15:49:05.428723    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:49:08 no-preload-673307 kubelet[1041]: E1013 15:49:08.427790    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:49:15 no-preload-673307 kubelet[1041]: I1013 15:49:15.430054    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:49:15 no-preload-673307 kubelet[1041]: E1013 15:49:15.430223    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:49:19 no-preload-673307 kubelet[1041]: E1013 15:49:19.430004    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:49:19 no-preload-673307 kubelet[1041]: E1013 15:49:19.430805    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:49:28 no-preload-673307 kubelet[1041]: I1013 15:49:28.427231    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:49:28 no-preload-673307 kubelet[1041]: E1013 15:49:28.427491    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:49:32 no-preload-673307 kubelet[1041]: E1013 15:49:32.427572    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:49:32 no-preload-673307 kubelet[1041]: E1013 15:49:32.429092    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:49:39 no-preload-673307 kubelet[1041]: I1013 15:49:39.426598    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:49:39 no-preload-673307 kubelet[1041]: E1013 15:49:39.426776    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	Oct 13 15:49:44 no-preload-673307 kubelet[1041]: E1013 15:49:44.427494    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-fx4gj" podUID="2445a7fe-b77c-44f6-bc4a-704b06b3c4fd"
	Oct 13 15:49:46 no-preload-673307 kubelet[1041]: E1013 15:49:46.428967    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqs5m" podUID="3a5ccb4a-aa9f-4d3f-8325-dc5d395b1ae7"
	Oct 13 15:49:51 no-preload-673307 kubelet[1041]: I1013 15:49:51.427357    1041 scope.go:117] "RemoveContainer" containerID="42257f6a74732c12fe9cc464aba0beaccf9e270b36920b96909bd985cffd8eb5"
	Oct 13 15:49:51 no-preload-673307 kubelet[1041]: E1013 15:49:51.427775    1041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fbbs2_kubernetes-dashboard(3fc51e63-1b5c-452c-9513-928f945dc4ef)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fbbs2" podUID="3fc51e63-1b5c-452c-9513-928f945dc4ef"
	
	
	==> storage-provisioner [68b3fdbaad74bfc96f73bc11bd3d91ea38819384d2ba896d82b799b59960cf1d] <==
	W1013 15:49:29.547546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:31.554510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:31.566131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:33.570092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:33.576616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:35.581552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:35.590656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:37.594151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:37.604460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:39.613327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:39.623146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:41.629042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:41.635725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:43.640562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:43.646602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:45.650308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:45.656303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:47.665564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:47.675282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:49.680807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:49.689821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:51.694406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:51.702932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:53.707202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:49:53.720582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8d68c0b5b0042ec3af32daf76852a75c8bbac2763603d8dce81657460ae9288] <==
	I1013 15:31:34.417706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:32:04.427642       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-673307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m: exit status 1 (66.060093ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-fx4gj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dqs5m" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-673307 describe pod metrics-server-746fcd58dc-fx4gj kubernetes-dashboard-855c9754f9-dqs5m: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v4zfv" [424f9607-da65-4bb7-be75-cf1ef1421095] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:50:23.948542811 +0000 UTC m=+6914.889101198
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-516717 describe po kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-516717 describe po kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-v4zfv
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-516717/192.168.72.104
Start Time:       Mon, 13 Oct 2025 15:32:13 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ndtp2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ndtp2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                   From               Message
----     ------            ----                  ----               -------
Warning  FailedScheduling  18m                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv to embed-certs-516717
Warning  Failed            16m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            15m (x5 over 18m)     kubelet            Error: ErrImagePull
Warning  Failed            15m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff           2m56s (x67 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            2m56s (x67 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard: exit status 1 (88.184968ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-v4zfv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-516717 logs kubernetes-dashboard-855c9754f9-v4zfv -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-516717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-516717 -n embed-certs-516717
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-516717 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-516717 logs -n 25: (1.768318591s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                     ARGS                                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:40 UTC │ 13 Oct 25 15:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-426789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ stop    │ -p default-k8s-diff-port-426789 --alsologtostderr -v=3                                                                                                                                                                                                                        │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:43 UTC │
	│ image   │ old-k8s-version-316150 image list --format=json                                                                                                                                                                                                                               │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ pause   │ -p old-k8s-version-316150 --alsologtostderr -v=1                                                                                                                                                                                                                              │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ unpause │ -p old-k8s-version-316150 --alsologtostderr -v=1                                                                                                                                                                                                                              │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-426789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                       │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                                       │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ stop    │ -p newest-cni-400509 --alsologtostderr -v=3                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-400509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                                  │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ image   │ newest-cni-400509 image list --format=json                                                                                                                                                                                                                                    │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ pause   │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ unpause │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ image   │ no-preload-673307 image list --format=json                                                                                                                                                                                                                                    │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ pause   │ -p no-preload-673307 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ unpause │ -p no-preload-673307 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ delete  │ -p no-preload-673307                                                                                                                                                                                                                                                          │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ delete  │ -p no-preload-673307                                                                                                                                                                                                                                                          │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:43:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:43:36.713594 1881569 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:43:36.713867 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.713876 1881569 out.go:374] Setting ErrFile to fd 2...
	I1013 15:43:36.713881 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.714128 1881569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:43:36.714601 1881569 out.go:368] Setting JSON to false
	I1013 15:43:36.715659 1881569 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26765,"bootTime":1760343452,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:43:36.715764 1881569 start.go:141] virtualization: kvm guest
	I1013 15:43:36.717879 1881569 out.go:179] * [newest-cni-400509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:43:36.719306 1881569 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:43:36.719352 1881569 notify.go:220] Checking for updates...
	I1013 15:43:36.722297 1881569 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:43:36.723784 1881569 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:36.728380 1881569 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:43:36.729831 1881569 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:43:36.731178 1881569 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:43:36.733044 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:36.733466 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.733553 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.748649 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1013 15:43:36.749362 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.749950 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.749983 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.750498 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.750765 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.751059 1881569 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:43:36.751384 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.751424 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.766235 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I1013 15:43:36.766738 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.767297 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.767322 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.767684 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.767908 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.805154 1881569 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 15:43:36.806336 1881569 start.go:305] selected driver: kvm2
	I1013 15:43:36.806354 1881569 start.go:925] validating driver "kvm2" against &{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.806467 1881569 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:43:36.807212 1881569 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.807326 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.823011 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.823050 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.837875 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.838417 1881569 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:43:36.838458 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:36.838518 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:36.838573 1881569 start.go:349] cluster config:
	{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.838736 1881569 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.841828 1881569 out.go:179] * Starting "newest-cni-400509" primary control-plane node in "newest-cni-400509" cluster
	I1013 15:43:35.461409 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:35.461442 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:35.461456 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | command : exit 0
	I1013 15:43:35.461470 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | err     : exit status 255
	I1013 15:43:35.461482 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | output  : 
	I1013 15:43:38.463606 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Getting to WaitForSSH function...
	I1013 15:43:38.467055 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.467571 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467755 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH client type: external
	I1013 15:43:38.467781 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa (-rw-------)
	I1013 15:43:38.467825 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:38.467840 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | About to run SSH command:
	I1013 15:43:38.467903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | exit 0
	I1013 15:43:36.843198 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:36.843293 1881569 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:43:36.843334 1881569 cache.go:58] Caching tarball of preloaded images
	I1013 15:43:36.843490 1881569 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:43:36.843509 1881569 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:43:36.843683 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:36.843944 1881569 start.go:360] acquireMachinesLock for newest-cni-400509: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:43:39.632101 1881569 start.go:364] duration metric: took 2.788099128s to acquireMachinesLock for "newest-cni-400509"
	I1013 15:43:39.632152 1881569 start.go:96] Skipping create...Using existing machine configuration
	I1013 15:43:39.632159 1881569 fix.go:54] fixHost starting: 
	I1013 15:43:39.632598 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:39.632657 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:39.649454 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I1013 15:43:39.650005 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:39.650546 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:39.650575 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:39.651029 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:39.651238 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:39.651401 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:43:39.654204 1881569 fix.go:112] recreateIfNeeded on newest-cni-400509: state=Stopped err=<nil>
	I1013 15:43:39.654249 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	W1013 15:43:39.654457 1881569 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 15:43:39.656851 1881569 out.go:252] * Restarting existing kvm2 VM for "newest-cni-400509" ...
	I1013 15:43:39.656907 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Start
	I1013 15:43:39.657076 1881569 main.go:141] libmachine: (newest-cni-400509) starting domain...
	I1013 15:43:39.657101 1881569 main.go:141] libmachine: (newest-cni-400509) ensuring networks are active...
	I1013 15:43:39.657900 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network default is active
	I1013 15:43:39.658431 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network mk-newest-cni-400509 is active
	I1013 15:43:39.658999 1881569 main.go:141] libmachine: (newest-cni-400509) getting domain XML...
	I1013 15:43:39.660153 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | starting domain XML:
	I1013 15:43:39.660177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | <domain type='kvm'>
	I1013 15:43:39.660215 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <name>newest-cni-400509</name>
	I1013 15:43:39.660260 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <uuid>27888586-a2e0-44db-a3c9-b78f39af9148</uuid>
	I1013 15:43:39.660278 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:43:39.660290 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:43:39.660307 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:43:39.660324 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <os>
	I1013 15:43:39.660338 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:43:39.660350 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='cdrom'/>
	I1013 15:43:39.660363 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='hd'/>
	I1013 15:43:39.660374 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <bootmenu enable='no'/>
	I1013 15:43:39.660381 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </os>
	I1013 15:43:39.660390 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <features>
	I1013 15:43:39.660431 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <acpi/>
	I1013 15:43:39.660458 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <apic/>
	I1013 15:43:39.660475 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <pae/>
	I1013 15:43:39.660482 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </features>
	I1013 15:43:39.660495 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:43:39.660517 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <clock offset='utc'/>
	I1013 15:43:39.660527 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:43:39.660535 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:43:39.660544 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_crash>destroy</on_crash>
	I1013 15:43:39.660554 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <devices>
	I1013 15:43:39.660565 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:43:39.660576 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='cdrom'>
	I1013 15:43:39.660585 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:43:39.660601 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/boot2docker.iso'/>
	I1013 15:43:39.660614 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:43:39.660624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <readonly/>
	I1013 15:43:39.660636 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:43:39.660645 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660655 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='disk'>
	I1013 15:43:39.660666 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:43:39.660683 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/newest-cni-400509.rawdisk'/>
	I1013 15:43:39.660701 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:43:39.660725 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:43:39.660734 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660746 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:43:39.660766 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:43:39.660777 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660795 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:43:39.660809 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:43:39.660833 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:43:39.660845 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660852 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660865 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:a8:3a:80'/>
	I1013 15:43:39.660880 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='mk-newest-cni-400509'/>
	I1013 15:43:39.660909 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.660934 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:43:39.660966 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.660982 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660998 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:ee:bd:4a'/>
	I1013 15:43:39.661014 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='default'/>
	I1013 15:43:39.661026 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.661044 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:43:39.661064 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.661072 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <serial type='pty'>
	I1013 15:43:39.661080 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='isa-serial' port='0'>
	I1013 15:43:39.661093 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |         <model name='isa-serial'/>
	I1013 15:43:39.661105 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       </target>
	I1013 15:43:39.661112 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </serial>
	I1013 15:43:39.661125 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <console type='pty'>
	I1013 15:43:39.661132 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='serial' port='0'/>
	I1013 15:43:39.661139 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </console>
	I1013 15:43:39.661146 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:43:39.661173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:43:39.661192 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <audio id='1' type='none'/>
	I1013 15:43:39.661213 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <memballoon model='virtio'>
	I1013 15:43:39.661263 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:43:39.661276 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </memballoon>
	I1013 15:43:39.661285 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <rng model='virtio'>
	I1013 15:43:39.661305 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:43:39.661325 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:43:39.661337 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </rng>
	I1013 15:43:39.661348 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </devices>
	I1013 15:43:39.661357 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | </domain>
	I1013 15:43:39.661367 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | 
	I1013 15:43:40.126826 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for domain to start...
	I1013 15:43:40.128784 1881569 main.go:141] libmachine: (newest-cni-400509) domain is now running
	I1013 15:43:40.128813 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for IP...
	I1013 15:43:40.129922 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.130919 1881569 main.go:141] libmachine: (newest-cni-400509) found domain IP: 192.168.39.58
	I1013 15:43:40.130941 1881569 main.go:141] libmachine: (newest-cni-400509) reserving static IP address...
	I1013 15:43:40.130955 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has current primary IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.131624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.131659 1881569 main.go:141] libmachine: (newest-cni-400509) reserved static IP address 192.168.39.58 for domain newest-cni-400509
	I1013 15:43:40.131687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | skip adding static IP to network mk-newest-cni-400509 - found existing host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"}
	I1013 15:43:40.131707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:40.131747 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for SSH...
	I1013 15:43:40.134418 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.134976 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.135005 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.135191 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:40.135247 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:40.135291 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:40.135327 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:40.135339 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:38.610349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:38.610819 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:43:38.611609 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:38.614998 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.615574 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615849 1881287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:43:38.616089 1881287 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:38.616107 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:38.616354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.619808 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620495 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.620528 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.620947 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621205 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621440 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.621677 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.621969 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.621982 1881287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:38.741296 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:38.741340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741648 1881287 buildroot.go:166] provisioning hostname "default-k8s-diff-port-426789"
	I1013 15:43:38.741682 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741931 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.745516 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746082 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.746124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.746557 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746778 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746938 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.747114 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.747384 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.747401 1881287 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-426789 && echo "default-k8s-diff-port-426789" | sudo tee /etc/hostname
	I1013 15:43:38.883536 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-426789
	
	I1013 15:43:38.883566 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.886934 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887401 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.887445 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887640 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.887893 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888084 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888211 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.888374 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.888582 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.888599 1881287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-426789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-426789/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-426789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:39.017088 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:39.017119 1881287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:39.017144 1881287 buildroot.go:174] setting up certificates
	I1013 15:43:39.017158 1881287 provision.go:84] configureAuth start
	I1013 15:43:39.017194 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:39.017591 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.020991 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021443 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.021466 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021667 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.024308 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.024740 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.024775 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.025056 1881287 provision.go:143] copyHostCerts
	I1013 15:43:39.025124 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:39.025142 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:39.025243 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:39.025421 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:39.025436 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:39.025483 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:39.025608 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:39.025622 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:39.025662 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:39.025772 1881287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-426789 san=[127.0.0.1 192.168.50.176 default-k8s-diff-port-426789 localhost minikube]
	I1013 15:43:39.142099 1881287 provision.go:177] copyRemoteCerts
	I1013 15:43:39.142168 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:39.142198 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.146110 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146639 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.146665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146950 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.147180 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.147364 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.147518 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.238167 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:39.273616 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 15:43:39.314055 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 15:43:39.358579 1881287 provision.go:87] duration metric: took 341.404418ms to configureAuth
	I1013 15:43:39.358616 1881287 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:39.358839 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:39.358854 1881287 machine.go:96] duration metric: took 742.756264ms to provisionDockerMachine
	I1013 15:43:39.358864 1881287 start.go:293] postStartSetup for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:43:39.358874 1881287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:39.358903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.359307 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:39.359349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.362558 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.362951 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.362982 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.363306 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.363546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.363773 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.363949 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.454925 1881287 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:39.460515 1881287 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:39.460550 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:39.460650 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:39.460784 1881287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:39.460899 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:39.474542 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:39.506976 1881287 start.go:296] duration metric: took 148.091906ms for postStartSetup
	I1013 15:43:39.507038 1881287 fix.go:56] duration metric: took 15.862602997s for fixHost
	I1013 15:43:39.507067 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.510376 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.510803 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.510837 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.511112 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.511361 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511540 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511666 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.511848 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:39.512046 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:39.512057 1881287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:39.631899 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370219.586411289
	
	I1013 15:43:39.631925 1881287 fix.go:216] guest clock: 1760370219.586411289
	I1013 15:43:39.631933 1881287 fix.go:229] Guest: 2025-10-13 15:43:39.586411289 +0000 UTC Remote: 2025-10-13 15:43:39.507044166 +0000 UTC m=+16.050668033 (delta=79.367123ms)
	I1013 15:43:39.631970 1881287 fix.go:200] guest clock delta is within tolerance: 79.367123ms
	I1013 15:43:39.631976 1881287 start.go:83] releasing machines lock for "default-k8s-diff-port-426789", held for 15.987562481s
	I1013 15:43:39.632004 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.632313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.636049 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636504 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.636554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636797 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637455 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637669 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637818 1881287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:39.637878 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.637920 1881287 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:39.637952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.641477 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641994 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642042 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642070 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642087 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642314 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642327 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642551 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642858 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.642902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.734708 1881287 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:39.760037 1881287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:39.768523 1881287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:39.768671 1881287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:39.792919 1881287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:39.792950 1881287 start.go:495] detecting cgroup driver to use...
	I1013 15:43:39.793023 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:39.831232 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:39.850993 1881287 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:39.851102 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:39.873826 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:39.896556 1881287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:40.064028 1881287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:40.305591 1881287 docker.go:234] disabling docker service ...
	I1013 15:43:40.305667 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:40.324329 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:40.340817 1881287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:40.541438 1881287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:40.704419 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:40.723755 1881287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:40.752026 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:40.767452 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:40.782881 1881287 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:40.782958 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:40.798473 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.813327 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:40.828869 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.843772 1881287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:40.859620 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:40.876007 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:40.891780 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
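The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place (pinning the pause image, forcing `cgroupfs` via `SystemdCgroup = false`, and so on). A minimal sketch of the same substitutions, run against a scratch copy so the effect is easy to inspect — the file content and `/tmp` path here are illustrative, not minikube's real config:

```shell
# Illustrative fragment of a containerd config.toml (not the real file).
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# The same capture-group sed edits the log applies: \1 preserves indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /tmp/config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml

grep -E 'sandbox_image|SystemdCgroup' /tmp/config.toml
```

Note the `-r` (extended regex) flag is GNU sed; the capture group keeps the original indentation so the TOML nesting survives the rewrite.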
	I1013 15:43:40.907887 1881287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:40.919493 1881287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:40.919559 1881287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:40.950308 1881287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:40.968591 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:41.139186 1881287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:41.183301 1881287 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:41.183403 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:41.190223 1881287 retry.go:31] will retry after 1.16806029s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:43:42.358579 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
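The "Will wait 60s for socket path" step above is a poll-with-retry on `stat` until containerd recreates its socket after restart. A hedged shell sketch of that wait loop, under the assumption of a simple fixed sleep rather than minikube's internal backoff (the function name and path are illustrative):

```shell
# Poll for a socket path until it appears or a 60s deadline passes.
# Returns 0 once stat succeeds, 1 on timeout.
wait_for_socket() {
  path=$1
  deadline=$(( $(date +%s) + 60 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if stat "$path" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Demo: an existing path is found immediately.
touch /tmp/containerd-demo.sock
wait_for_socket /tmp/containerd-demo.sock && echo found
```

In the log the first `stat` fails ("No such file or directory") and the retry about a second later succeeds, matching this pattern.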
	I1013 15:43:42.366926 1881287 start.go:563] Will wait 60s for crictl version
	I1013 15:43:42.367063 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:42.372655 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:42.429723 1881287 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:42.429814 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.471739 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.509604 1881287 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:42.511075 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:42.514790 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:42.515383 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515708 1881287 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:42.520820 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
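The `/etc/hosts` update above is idempotent: it strips any existing `host.minikube.internal` line before appending the current mapping, so repeated runs never accumulate duplicates. A minimal sketch of the same pattern against a temp file (the demo path and IPs are illustrative; `$'...'` tab quoting requires bash):

```shell
HOSTS=/tmp/hosts.demo

# Seed a hosts file that already has a stale mapping.
printf '127.0.0.1\tlocalhost\n192.168.50.2\thost.minikube.internal\n' > "$HOSTS"

# Drop the old entry, append the fresh one, then swap the file in.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
  echo $'192.168.50.1\thost.minikube.internal'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

grep host.minikube.internal "$HOSTS"
```

The anchored pattern (tab prefix, `$` suffix) ensures only the exact hostname entry is removed, not substring matches.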
	I1013 15:43:42.537702 1881287 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:42.537834 1881287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:42.537882 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.577897 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.577934 1881287 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:42.578012 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.626753 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.626790 1881287 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:42.626816 1881287 kubeadm.go:934] updating node { 192.168.50.176 8444 v1.34.1 containerd true true} ...
	I1013 15:43:42.626973 1881287 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-426789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:42.627112 1881287 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:42.670994 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:42.671035 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:42.671067 1881287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 15:43:42.671108 1881287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.176 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-426789 NodeName:default-k8s-diff-port-426789 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/mini
kube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:42.671296 1881287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.176
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-426789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:42.671382 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:42.685850 1881287 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:42.685938 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:42.702293 1881287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1013 15:43:42.726402 1881287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:43:42.754908 1881287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1013 15:43:42.782246 1881287 ssh_runner.go:195] Run: grep 192.168.50.176	control-plane.minikube.internal$ /etc/hosts
	I1013 15:43:42.788445 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:42.806629 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:42.987595 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:43.027112 1881287 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789 for IP: 192.168.50.176
	I1013 15:43:43.027140 1881287 certs.go:195] generating shared ca certs ...
	I1013 15:43:43.027163 1881287 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.027383 1881287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:43:43.027460 1881287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:43:43.027483 1881287 certs.go:257] generating profile certs ...
	I1013 15:43:43.027635 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key
	I1013 15:43:43.027760 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8
	I1013 15:43:43.027826 1881287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key
	I1013 15:43:43.027999 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:43:43.028050 1881287 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:43:43.028066 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:43:43.028098 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:43:43.028131 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:43:43.028163 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:43:43.028239 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:43.029002 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:43:43.082431 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:43:43.140436 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:43:43.210359 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:43:43.257226 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 15:43:43.298663 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 15:43:43.332285 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:43:43.369205 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:43:43.410586 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:43:43.451819 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:43:43.486367 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:43:43.524801 1881287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:43:43.547937 1881287 ssh_runner.go:195] Run: openssl version
	I1013 15:43:43.555474 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:43:43.571070 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579175 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579263 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.587603 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:43:43.604566 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:43:43.620309 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.626957 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.627045 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.635543 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:43:43.651153 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:43:43.666800 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674478 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674540 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.685525 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
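The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above install each CA under its OpenSSL subject-hash name, which is how OpenSSL locates trust anchors in a hashed cert directory. A sketch of where names like `b5213941.0` come from, using a throwaway self-signed CA in `/tmp` (all paths and the CN are illustrative):

```shell
# Generate a throwaway self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/ca.key -out /tmp/ca.pem \
  -days 30 -nodes -subj /CN=demoCA 2>/dev/null

# The hashed-directory link name is the subject hash plus a ".0" suffix
# (".1", ".2", ... on hash collisions).
h=$(openssl x509 -hash -noout -in /tmp/ca.pem)
ln -fs /tmp/ca.pem "/tmp/$h.0"

ls -l "/tmp/$h.0"
```

This matches the log's two-step flow: `openssl x509 -hash -noout` to compute the name, then a `test -L || ln -fs` to create the link only when missing.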
	I1013 15:43:43.702224 1881287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:43:43.709862 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:43:43.720756 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:43:43.729444 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:43:43.737616 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:43:43.745934 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:43:43.754091 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
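Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, nonzero triggers regeneration. A self-contained sketch with a freshly minted 30-day cert (paths and CN are illustrative):

```shell
# Mint a cert valid for 30 days.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 30 -nodes -subj /CN=demo 2>/dev/null

# -checkend N exits 0 iff the cert does NOT expire within N seconds.
openssl x509 -noout -in /tmp/demo.crt -checkend 86400 && echo still-valid
```

Using the exit status (rather than parsing `notAfter` dates) keeps the per-cert check a single pipeline-friendly command, which is why the log runs it once per control-plane certificate.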
	I1013 15:43:43.762115 1881287 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddre
ss: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:43.762208 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:43:43.762293 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.808267 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.808301 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.808306 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.808312 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.808316 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.808322 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.808327 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.808338 1881287 cri.go:89] found id: ""
	I1013 15:43:43.808404 1881287 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:43:43.831377 1881287 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:43:43Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:43:43.831483 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:43:43.845227 1881287 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:43:43.845260 1881287 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:43:43.845327 1881287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:43:43.863194 1881287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:43:43.864292 1881287 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-426789" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:43.864923 1881287 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-426789" cluster setting kubeconfig missing "default-k8s-diff-port-426789" context setting]
	I1013 15:43:43.865728 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.867585 1881287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:43:43.883585 1881287 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.50.176
	I1013 15:43:43.883642 1881287 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:43:43.883662 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:43:43.883756 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.948818 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.948851 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.948857 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.948863 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.948868 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.948872 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.948876 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.948880 1881287 cri.go:89] found id: ""
	I1013 15:43:43.948890 1881287 cri.go:252] Stopping containers: [7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547]
	I1013 15:43:43.948976 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:43.955264 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547
	I1013 15:43:44.001390 1881287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:43:44.022439 1881287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:43:44.035325 1881287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:43:44.035351 1881287 kubeadm.go:157] found existing configuration files:
	
	I1013 15:43:44.035411 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 15:43:44.047208 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:43:44.047292 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:43:44.060647 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 15:43:44.074202 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:43:44.074279 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:43:44.088532 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.103533 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:43:44.103601 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.122077 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 15:43:44.134937 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:43:44.135018 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:43:44.147842 1881287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:43:44.162447 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:44.318010 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:45.992643 1881287 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.674585761s)
	I1013 15:43:45.992768 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.260999 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.358031 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.484897 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:46.485026 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:46.986001 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.485965 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.985368 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:48.031141 1881287 api_server.go:72] duration metric: took 1.546261555s to wait for apiserver process to appear ...
	I1013 15:43:48.031174 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:48.031199 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.397143 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:51.397186 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:51.397205 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | command : exit 0
	I1013 15:43:51.397214 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | err     : exit status 255
	I1013 15:43:51.397235 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | output  : 
	I1013 15:43:50.751338 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.751376 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	I1013 15:43:50.751412 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:50.842254 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.842294 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	I1013 15:43:51.031709 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.038850 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.038888 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	I1013 15:43:51.531498 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.540163 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.540193 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	I1013 15:43:52.031686 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.042465 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:52.042504 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	I1013 15:43:52.531913 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.538420 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:52.550202 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:52.550246 1881287 api_server.go:131] duration metric: took 4.519061614s to wait for apiserver health ...
	I1013 15:43:52.550262 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:52.550273 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:52.552571 1881287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:43:52.554067 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:43:52.574739 1881287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:43:52.604706 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:52.613468 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:52.613525 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:52.613537 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:52.613547 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:52.613563 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:52.613576 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 15:43:52.613595 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:52.613609 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:52.613617 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:43:52.613628 1881287 system_pods.go:74] duration metric: took 8.879878ms to wait for pod list to return data ...
	I1013 15:43:52.613643 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:52.618132 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:52.618175 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:52.618192 1881287 node_conditions.go:105] duration metric: took 4.543501ms to run NodePressure ...
	I1013 15:43:52.618275 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:53.069625 1881287 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076322 1881287 kubeadm.go:743] kubelet initialised
	I1013 15:43:53.076353 1881287 kubeadm.go:744] duration metric: took 6.69335ms waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076378 1881287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:43:53.108126 1881287 ops.go:34] apiserver oom_adj: -16
	I1013 15:43:53.108163 1881287 kubeadm.go:601] duration metric: took 9.262892964s to restartPrimaryControlPlane
	I1013 15:43:53.108181 1881287 kubeadm.go:402] duration metric: took 9.346075744s to StartCluster
	I1013 15:43:53.108210 1881287 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.108336 1881287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:53.110574 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.111002 1881287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:43:53.111137 1881287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:43:53.111274 1881287 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111277 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:53.111300 1881287 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111313 1881287 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:43:53.111324 1881287 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111339 1881287 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111346 1881287 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:53.111350 1881287 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111359 1881287 addons.go:247] addon dashboard should already be in state true
	W1013 15:43:53.111360 1881287 addons.go:247] addon metrics-server should already be in state true
	I1013 15:43:53.111379 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111387 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111402 1881287 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111347 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111445 1881287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-426789"
	I1013 15:43:53.111808 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111805 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111835 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111848 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111868 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111964 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.112184 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.112238 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.115926 1881287 out.go:179] * Verifying Kubernetes components...
	I1013 15:43:53.117837 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:53.131021 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I1013 15:43:53.131145 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I1013 15:43:53.131263 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I1013 15:43:53.131306 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1013 15:43:53.131780 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.131963 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132182 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132306 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132328 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132489 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132502 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132656 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132786 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132818 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132923 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.132945 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133266 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133335 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.133352 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.133493 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.133868 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.133922 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134084 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.134115 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134175 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.135005 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.135097 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.138473 1881287 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.138535 1881287 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:43:53.138571 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.138951 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.138996 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.153375 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1013 15:43:53.154086 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1013 15:43:53.154354 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.154973 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.155287 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155384 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155522 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155588 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155980 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156055 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156311 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.156695 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.159943 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.160580 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.161397 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I1013 15:43:53.161596 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1013 15:43:53.162371 1881287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:43:53.162442 1881287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:43:53.162491 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.162623 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.163108 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163158 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163241 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163269 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163621 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163868 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163948 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.164392 1881287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.164414 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:43:53.164436 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.164610 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.164680 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.165704 1881287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:43:53.167086 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:43:53.167111 1881287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:43:53.167145 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.167519 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.169405 1881287 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:43:53.170806 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:43:53.170839 1881287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:43:53.170868 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.170970 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.172904 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.172958 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.173486 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.174763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.175298 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.175869 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.177546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.178363 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179072 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179380 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179403 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179451 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179501 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179539 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179550 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179830 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179923 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.180049 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.188031 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I1013 15:43:53.188746 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.189369 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.189391 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.189889 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.190124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.192665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.192993 1881287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.193015 1881287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:43:53.193041 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.197517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198127 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.198171 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198708 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.198952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.199191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.199425 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:54.398978 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:54.402868 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403485 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.403522 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403692 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:54.403735 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:54.403786 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:54.403800 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:54.403823 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:54.544257 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:54.544730 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetConfigRaw
	I1013 15:43:54.545413 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.549394 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550047 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.550090 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550494 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:54.550797 1881569 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:54.550830 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:54.551132 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.554299 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.554754 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554943 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.555175 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555424 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555617 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.555946 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.556248 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.556260 1881569 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:54.688707 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:54.688778 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689138 1881569 buildroot.go:166] provisioning hostname "newest-cni-400509"
	I1013 15:43:54.689168 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689397 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.693596 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694246 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.694300 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694537 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.694811 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695013 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695198 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.695392 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.695702 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.695740 1881569 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400509 && echo "newest-cni-400509" | sudo tee /etc/hostname
	I1013 15:43:54.834089 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400509
	
	I1013 15:43:54.834128 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.838142 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.838584 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.838632 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.839006 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.839287 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839492 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839694 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.840030 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.840291 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.840310 1881569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400509/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:54.976516 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:54.976554 1881569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:54.976618 1881569 buildroot.go:174] setting up certificates
	I1013 15:43:54.976643 1881569 provision.go:84] configureAuth start
	I1013 15:43:54.976668 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.977165 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.981371 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.981937 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.981969 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.982449 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.986173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986658 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.986687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986975 1881569 provision.go:143] copyHostCerts
	I1013 15:43:54.987049 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:54.987072 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:54.987167 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:54.987325 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:54.987339 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:54.987386 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:54.987492 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:54.987508 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:54.987563 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:54.987652 1881569 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400509 san=[127.0.0.1 192.168.39.58 localhost minikube newest-cni-400509]
	I1013 15:43:56.105921 1881569 provision.go:177] copyRemoteCerts
	I1013 15:43:56.105986 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:56.106012 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.109883 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110333 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.110378 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.110940 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.111126 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.111313 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.204900 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:56.250950 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 15:43:56.289008 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 15:43:56.329429 1881569 provision.go:87] duration metric: took 1.352737429s to configureAuth
	I1013 15:43:56.329473 1881569 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:56.329690 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:56.329707 1881569 machine.go:96] duration metric: took 1.778889003s to provisionDockerMachine
	I1013 15:43:56.329732 1881569 start.go:293] postStartSetup for "newest-cni-400509" (driver="kvm2")
	I1013 15:43:56.329749 1881569 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:56.329787 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.330185 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:56.330228 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.334038 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334514 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.334549 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334786 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.335028 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.335223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.335409 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.434835 1881569 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:56.440734 1881569 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:56.440767 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:56.440835 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:56.440916 1881569 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:56.441040 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:56.459176 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:56.502925 1881569 start.go:296] duration metric: took 173.137045ms for postStartSetup
	I1013 15:43:56.502995 1881569 fix.go:56] duration metric: took 16.870835137s for fixHost
	I1013 15:43:56.503030 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.506452 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.506870 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.506935 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.507108 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.507367 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507582 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507785 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.508020 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:56.508247 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:56.508261 1881569 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:56.624915 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370236.574388905
	
	I1013 15:43:56.624944 1881569 fix.go:216] guest clock: 1760370236.574388905
	I1013 15:43:56.624957 1881569 fix.go:229] Guest: 2025-10-13 15:43:56.574388905 +0000 UTC Remote: 2025-10-13 15:43:56.50300288 +0000 UTC m=+19.831043931 (delta=71.386025ms)
	I1013 15:43:56.625020 1881569 fix.go:200] guest clock delta is within tolerance: 71.386025ms
	I1013 15:43:56.625030 1881569 start.go:83] releasing machines lock for "newest-cni-400509", held for 16.992897063s
	I1013 15:43:56.625061 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.625392 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:56.628808 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629195 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.629225 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629541 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630278 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630480 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630581 1881569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:56.630650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.630706 1881569 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:56.630755 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.635920 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636466 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636492 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.636511 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636805 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.637052 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.637161 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.637177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.637345 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.637508 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.637592 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.638223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.638488 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.638658 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:53.506025 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:53.552445 1881287 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561765 1881287 node_ready.go:49] node "default-k8s-diff-port-426789" is "Ready"
	I1013 15:43:53.561797 1881287 node_ready.go:38] duration metric: took 9.308209ms for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561815 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:53.561875 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:53.620414 1881287 api_server.go:72] duration metric: took 509.358173ms to wait for apiserver process to appear ...
	I1013 15:43:53.620447 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:53.620471 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:53.648031 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:53.650864 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:53.650897 1881287 api_server.go:131] duration metric: took 30.442085ms to wait for apiserver health ...
	I1013 15:43:53.650909 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:53.673424 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:53.673472 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.673485 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.673496 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.673507 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.673518 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.673527 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.673540 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.673549 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.673559 1881287 system_pods.go:74] duration metric: took 22.641644ms to wait for pod list to return data ...
	I1013 15:43:53.673573 1881287 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:43:53.685624 1881287 default_sa.go:45] found service account: "default"
	I1013 15:43:53.685669 1881287 default_sa.go:55] duration metric: took 12.081401ms for default service account to be created ...
	I1013 15:43:53.685695 1881287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 15:43:53.703485 1881287 system_pods.go:86] 8 kube-system pods found
	I1013 15:43:53.703536 1881287 system_pods.go:89] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.703551 1881287 system_pods.go:89] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.703563 1881287 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.703577 1881287 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.703585 1881287 system_pods.go:89] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.703592 1881287 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.703602 1881287 system_pods.go:89] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.703612 1881287 system_pods.go:89] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.703625 1881287 system_pods.go:126] duration metric: took 17.919545ms to wait for k8s-apps to be running ...
	I1013 15:43:53.703639 1881287 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 15:43:53.703708 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 15:43:53.836388 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.847671 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.859317 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:43:53.859351 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:43:53.863118 1881287 system_svc.go:56] duration metric: took 159.468238ms WaitForService to wait for kubelet
	I1013 15:43:53.863156 1881287 kubeadm.go:586] duration metric: took 752.10936ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:43:53.863183 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:53.868102 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:43:53.868135 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:43:53.876846 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:53.876881 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:53.876895 1881287 node_conditions.go:105] duration metric: took 13.705749ms to run NodePressure ...
	I1013 15:43:53.876911 1881287 start.go:241] waiting for startup goroutines ...
	I1013 15:43:53.975801 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:43:53.975837 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:43:54.014372 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:43:54.014413 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:43:54.097966 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.098001 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:43:54.102029 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:43:54.102070 1881287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:43:54.231798 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:43:54.231824 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:43:54.279938 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.422682 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:43:54.422738 1881287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:43:54.559022 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:43:54.559045 1881287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:43:54.673642 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:43:54.673671 1881287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:43:54.816125 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:43:54.816167 1881287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:43:54.994488 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:54.994521 1881287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:43:55.030337 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.193903867s)
	I1013 15:43:55.030400 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030415 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.030809 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.030875 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.030890 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.030903 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030915 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.031248 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.031256 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.031269 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060389 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.060423 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.060934 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.060958 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060959 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.140795 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:56.965227 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.117511004s)
	I1013 15:43:56.965299 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.965682 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.965698 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:56.965701 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.965725 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965735 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.966055 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.966089 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.982812 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.702823647s)
	I1013 15:43:56.982887 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.982902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983290 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983313 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983346 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.983354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983623 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983642 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983654 1881287 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:57.358086 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.217241399s)
	I1013 15:43:57.358160 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358174 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358579 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358599 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.358609 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358631 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358917 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:57.358932 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358960 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.363260 1881287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-426789 addons enable metrics-server
	
	I1013 15:43:57.365802 1881287 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1013 15:43:57.367317 1881287 addons.go:514] duration metric: took 4.256188456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1013 15:43:57.367371 1881287 start.go:246] waiting for cluster config update ...
	I1013 15:43:57.367388 1881287 start.go:255] writing updated cluster config ...
	I1013 15:43:57.367791 1881287 ssh_runner.go:195] Run: rm -f paused
	I1013 15:43:57.378391 1881287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:43:57.391148 1881287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:43:56.747519 1881569 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:56.754883 1881569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:56.762412 1881569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:56.762502 1881569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:56.786981 1881569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:56.787012 1881569 start.go:495] detecting cgroup driver to use...
	I1013 15:43:56.787098 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:56.822198 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:56.844111 1881569 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:56.844200 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:56.869650 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:56.890055 1881569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:57.069567 1881569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:57.320533 1881569 docker.go:234] disabling docker service ...
	I1013 15:43:57.320624 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:57.340325 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:57.358343 1881569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:57.573206 1881569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:57.752872 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:57.778609 1881569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:57.809437 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:57.825120 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:57.841470 1881569 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:57.841551 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:57.858777 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.874650 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:57.889338 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.905170 1881569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:57.921541 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:57.937087 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:57.951733 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:43:57.967796 1881569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:57.981546 1881569 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:57.981609 1881569 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:58.008790 1881569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:58.024908 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:58.218957 1881569 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:58.264961 1881569 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:58.265076 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:58.271878 1881569 retry.go:31] will retry after 1.359480351s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:43:59.632478 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:59.640017 1881569 start.go:563] Will wait 60s for crictl version
	I1013 15:43:59.640109 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:43:59.646533 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:59.704210 1881569 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:59.704321 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.745848 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.781571 1881569 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:59.783056 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:59.787259 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.787813 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:59.787850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.788151 1881569 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:59.793319 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:59.813808 1881569 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 15:43:59.815535 1881569 kubeadm.go:883] updating cluster {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Sche
duledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:59.815759 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:59.815862 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.858933 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.858960 1881569 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:59.859025 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.900328 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.900362 1881569 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:59.900381 1881569 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.34.1 containerd true true} ...
	I1013 15:43:59.900516 1881569 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-400509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:59.900613 1881569 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:59.950762 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:59.950793 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:59.950823 1881569 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 15:43:59.950864 1881569 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400509 NodeName:newest-cni-400509 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:59.951043 1881569 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-400509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:59.951135 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:59.967876 1881569 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:59.967956 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:59.982916 1881569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1013 15:44:00.010237 1881569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:44:00.040144 1881569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1013 15:44:00.066386 1881569 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1013 15:44:00.071339 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:44:00.090025 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:00.252566 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:00.303616 1881569 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509 for IP: 192.168.39.58
	I1013 15:44:00.303643 1881569 certs.go:195] generating shared ca certs ...
	I1013 15:44:00.303666 1881569 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:00.303875 1881569 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:44:00.303956 1881569 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:44:00.303979 1881569 certs.go:257] generating profile certs ...
	I1013 15:44:00.304150 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/client.key
	I1013 15:44:00.304227 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key.832cd03a
	I1013 15:44:00.304286 1881569 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key
	I1013 15:44:00.304458 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:44:00.304508 1881569 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:44:00.304522 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:44:00.304562 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:44:00.304594 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:44:00.304628 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:44:00.304681 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:44:00.305582 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:44:00.349695 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:44:00.394423 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:44:00.453420 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:44:00.500378 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 15:44:00.553138 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 15:44:00.590334 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:44:00.630023 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:44:00.668829 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:44:00.712223 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:44:00.752915 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:44:00.789877 1881569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:44:00.813337 1881569 ssh_runner.go:195] Run: openssl version
	I1013 15:44:00.821230 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:44:00.837532 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843842 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843915 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.852403 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:44:00.868962 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:44:00.887762 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895478 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895571 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.904610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:44:00.921509 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:44:00.940954 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947541 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947630 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.956030 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:44:00.974527 1881569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:44:00.981332 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:44:00.992960 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:44:01.004003 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:44:01.012671 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:44:01.020681 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:44:01.028927 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 15:44:01.037647 1881569 kubeadm.go:400] StartCluster: {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:44:01.037778 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:44:01.037843 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.097948 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.097981 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.097988 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.097993 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.097997 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.098002 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.098006 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.098010 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.098014 1881569 cri.go:89] found id: ""
	I1013 15:44:01.098075 1881569 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:44:01.122443 1881569 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:44:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:44:01.122587 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:44:01.144393 1881569 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:44:01.144424 1881569 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:44:01.144489 1881569 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:44:01.159059 1881569 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:44:01.160097 1881569 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-400509" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:01.160849 1881569 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-400509" cluster setting kubeconfig missing "newest-cni-400509" context setting]
	I1013 15:44:01.162117 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:01.164324 1881569 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:44:01.182868 1881569 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.58
	I1013 15:44:01.182912 1881569 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:44:01.182929 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:44:01.183008 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.236181 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.236210 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.236217 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.236223 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.236228 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.236233 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.236237 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.236241 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.236245 1881569 cri.go:89] found id: ""
	I1013 15:44:01.236272 1881569 cri.go:252] Stopping containers: [1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c]
	I1013 15:44:01.236375 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:44:01.241802 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c
	I1013 15:44:01.290389 1881569 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:44:01.314882 1881569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:44:01.329255 1881569 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:44:01.329305 1881569 kubeadm.go:157] found existing configuration files:
	
	I1013 15:44:01.329373 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 15:44:01.341956 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:44:01.342028 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:44:01.355841 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 15:44:01.368810 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:44:01.368903 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:44:01.382268 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.396472 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:44:01.396552 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.412562 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 15:44:01.426123 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:44:01.426188 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:44:01.442585 1881569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:44:01.460493 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:01.611108 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	W1013 15:43:59.400593 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	W1013 15:44:01.404013 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	I1013 15:44:02.909951 1881287 pod_ready.go:94] pod "coredns-66bc5c9577-7mm74" is "Ready"
	I1013 15:44:02.909990 1881287 pod_ready.go:86] duration metric: took 5.518800662s for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.913489 1881287 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.919647 1881287 pod_ready.go:94] pod "etcd-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:02.919678 1881287 pod_ready.go:86] duration metric: took 6.161871ms for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.928092 1881287 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.438075 1881287 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.438113 1881287 pod_ready.go:86] duration metric: took 1.509988538s for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.442872 1881287 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.451602 1881287 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.451645 1881287 pod_ready.go:86] duration metric: took 8.73711ms for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.497031 1881287 pod_ready.go:83] waiting for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.897578 1881287 pod_ready.go:94] pod "kube-proxy-2vt8l" is "Ready"
	I1013 15:44:04.897618 1881287 pod_ready.go:86] duration metric: took 400.546183ms for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.096440 1881287 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496577 1881287 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:05.496616 1881287 pod_ready.go:86] duration metric: took 400.135912ms for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496664 1881287 pod_ready.go:40] duration metric: took 8.118190331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:44:05.552871 1881287 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:05.554860 1881287 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-426789" cluster and "default" namespace by default
	I1013 15:44:02.860183 1881569 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.249017124s)
	I1013 15:44:02.860277 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.168409 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.257048 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.348980 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:03.349102 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:03.849619 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.350010 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.849274 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.888091 1881569 api_server.go:72] duration metric: took 1.539128472s to wait for apiserver process to appear ...
	I1013 15:44:04.888128 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:04.888157 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:04.888817 1881569 api_server.go:269] stopped: https://192.168.39.58:8443/healthz: Get "https://192.168.39.58:8443/healthz": dial tcp 192.168.39.58:8443: connect: connection refused
	I1013 15:44:05.388397 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:07.970700 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:07.970755 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:07.970773 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.014873 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:08.014906 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:08.388242 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.394684 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.394733 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:08.888394 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.898015 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.898049 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.388508 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.394367 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.394400 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.888304 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.895427 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.895462 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:10.389244 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:10.396050 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:10.404568 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:10.404611 1881569 api_server.go:131] duration metric: took 5.516473663s to wait for apiserver health ...
	I1013 15:44:10.404626 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:44:10.404634 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:44:10.406752 1881569 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:44:10.408371 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:44:10.423786 1881569 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:44:10.455726 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:10.462697 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:10.462753 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462769 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462780 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:10.462790 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:10.462802 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:10.462808 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:10.462815 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:10.462842 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:10.462847 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:10.462855 1881569 system_pods.go:74] duration metric: took 7.102704ms to wait for pod list to return data ...
	I1013 15:44:10.462869 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:10.467505 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:10.467542 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:10.467556 1881569 node_conditions.go:105] duration metric: took 4.682317ms to run NodePressure ...
	I1013 15:44:10.467610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:10.762255 1881569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:44:10.780389 1881569 ops.go:34] apiserver oom_adj: -16
	I1013 15:44:10.780421 1881569 kubeadm.go:601] duration metric: took 9.635988482s to restartPrimaryControlPlane
	I1013 15:44:10.780437 1881569 kubeadm.go:402] duration metric: took 9.742806388s to StartCluster
	I1013 15:44:10.780475 1881569 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.780589 1881569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:10.782504 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.782808 1881569 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:44:10.782888 1881569 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:44:10.783000 1881569 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400509"
	I1013 15:44:10.783025 1881569 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400509"
	W1013 15:44:10.783033 1881569 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:44:10.783032 1881569 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400509"
	I1013 15:44:10.783057 1881569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400509"
	I1013 15:44:10.783065 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783066 1881569 addons.go:69] Setting metrics-server=true in profile "newest-cni-400509"
	I1013 15:44:10.783090 1881569 addons.go:69] Setting dashboard=true in profile "newest-cni-400509"
	I1013 15:44:10.783117 1881569 addons.go:238] Setting addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:10.783123 1881569 addons.go:238] Setting addon dashboard=true in "newest-cni-400509"
	W1013 15:44:10.783132 1881569 addons.go:247] addon dashboard should already be in state true
	I1013 15:44:10.783147 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:44:10.783174 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	W1013 15:44:10.783132 1881569 addons.go:247] addon metrics-server should already be in state true
	I1013 15:44:10.783246 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783508 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783559 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783583 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783505 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783614 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783640 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783648 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783670 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.784368 1881569 out.go:179] * Verifying Kubernetes components...
	I1013 15:44:10.785756 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I1013 15:44:10.801032 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801109 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 15:44:10.801246 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801506 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801929 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.801955 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802056 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802082 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802110 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I1013 15:44:10.802430 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802455 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802480 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802460 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802674 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.803138 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803158 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803208 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803230 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803443 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.803454 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.803467 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.803920 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.804033 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.804083 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.804124 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.812531 1881569 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400509"
	W1013 15:44:10.812560 1881569 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:44:10.812594 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.812997 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.813066 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.820690 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1013 15:44:10.821988 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.822645 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.822687 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.823210 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.823487 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.827289 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.829099 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1013 15:44:10.829669 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.829812 1881569 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:44:10.830088 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I1013 15:44:10.830259 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.830280 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.830669 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.830818 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.830868 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.831364 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.831385 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.832151 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I1013 15:44:10.832239 1881569 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:44:10.832197 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.833231 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.833272 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.833297 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.833471 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:44:10.833488 1881569 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:44:10.833508 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.833970 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.834643 1881569 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:44:10.834786 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.834839 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.835807 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.837731 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838271 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.838321 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838595 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.838792 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.838994 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.839128 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.839520 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:44:10.839547 1881569 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:44:10.839574 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.840359 1881569 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:44:10.841784 1881569 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:10.841804 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:44:10.841825 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.844531 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845501 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.845570 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845952 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.846206 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.846484 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.846861 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.847137 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.847628 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.847850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.848261 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.848469 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.848657 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.848992 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.853772 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1013 15:44:10.854204 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.854681 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.854698 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.855059 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.855327 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.857412 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.857679 1881569 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:10.857694 1881569 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:44:10.857728 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.861587 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.861994 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.862021 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.862318 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.862498 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.862640 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.862796 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:11.065604 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:11.089626 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:11.089733 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:11.110889 1881569 api_server.go:72] duration metric: took 328.043615ms to wait for apiserver process to appear ...
	I1013 15:44:11.110921 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:11.110945 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:11.116791 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:11.117887 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:11.117919 1881569 api_server.go:131] duration metric: took 6.988921ms to wait for apiserver health ...
	I1013 15:44:11.117931 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:11.127122 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:11.127169 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127186 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127195 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:11.127208 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:11.127214 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:11.127218 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:11.127223 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:11.127228 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:11.127231 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:11.127241 1881569 system_pods.go:74] duration metric: took 9.299922ms to wait for pod list to return data ...
	I1013 15:44:11.127267 1881569 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:44:11.131642 1881569 default_sa.go:45] found service account: "default"
	I1013 15:44:11.131672 1881569 default_sa.go:55] duration metric: took 4.396286ms for default service account to be created ...
	I1013 15:44:11.131689 1881569 kubeadm.go:586] duration metric: took 348.849317ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:44:11.131723 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:11.135748 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:11.135781 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:11.135795 1881569 node_conditions.go:105] duration metric: took 4.065136ms to run NodePressure ...
	I1013 15:44:11.135809 1881569 start.go:241] waiting for startup goroutines ...
	I1013 15:44:11.297679 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:44:11.297704 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:44:11.302366 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:44:11.302395 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:44:11.328126 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:11.336312 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:11.390077 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:44:11.390113 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:44:11.401349 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:44:11.401380 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:44:11.487081 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:44:11.487113 1881569 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:44:11.514896 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.514927 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:44:11.548697 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:44:11.548735 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:44:11.576084 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.638992 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:44:11.639025 1881569 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:44:11.739144 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:44:11.739177 1881569 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:44:11.851415 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:44:11.851451 1881569 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:44:11.964190 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:44:11.964227 1881569 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:44:12.151581 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:12.151616 1881569 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:44:12.348324 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:14.548429 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.212077572s)
	I1013 15:44:14.548509 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548523 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548612 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.22045241s)
	I1013 15:44:14.548643 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548889 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.548910 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.548922 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548931 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549013 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549064 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549083 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549102 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.549113 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549247 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549260 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549515 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549546 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549552 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.590958 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.590989 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.591387 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.591401 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.591419 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690046 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.113908538s)
	I1013 15:44:14.690105 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690120 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690573 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690605 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690622 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690634 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690904 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690936 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690957 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690981 1881569 addons.go:479] Verifying addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:15.069622 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.721227304s)
	I1013 15:44:15.069689 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.069705 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070241 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:15.070270 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070282 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.070295 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.070301 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070572 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070587 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.074390 1881569 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-400509 addons enable metrics-server
	
	I1013 15:44:15.076426 1881569 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1013 15:44:15.077979 1881569 addons.go:514] duration metric: took 4.295084518s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1013 15:44:15.078038 1881569 start.go:246] waiting for cluster config update ...
	I1013 15:44:15.078071 1881569 start.go:255] writing updated cluster config ...
	I1013 15:44:15.078443 1881569 ssh_runner.go:195] Run: rm -f paused
	I1013 15:44:15.144611 1881569 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:15.146748 1881569 out.go:179] * Done! kubectl is now configured to use "newest-cni-400509" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	a012e2ab8913f       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   8                   80d4daeefd3f6       dashboard-metrics-scraper-6ffb444bf9-6v4dm
	a1eeedac0325f       6e38f40d628db       17 minutes ago       Running             storage-provisioner         3                   48a056cd7065e       storage-provisioner
	5c2c9b6372899       52546a367cc9e       18 minutes ago       Running             coredns                     1                   e92c76fb8e45e       coredns-66bc5c9577-rmhlp
	4c38adb34c612       56cc512116c8f       18 minutes ago       Running             busybox                     1                   f6ae93be9ab01       busybox
	034ea310c76d5       fc25172553d79       18 minutes ago       Running             kube-proxy                  1                   e0e58aa347e2c       kube-proxy-qlfhm
	e93a05bb96f31       6e38f40d628db       18 minutes ago       Exited              storage-provisioner         2                   48a056cd7065e       storage-provisioner
	30195cdffd020       5f1f5298c888d       18 minutes ago       Running             etcd                        1                   440fc426ed820       etcd-embed-certs-516717
	19ae15867a847       7dd6aaa1717ab       18 minutes ago       Running             kube-scheduler              1                   0ade695d1e97c       kube-scheduler-embed-certs-516717
	253c31b6993f1       c80c8dbafe7dd       18 minutes ago       Running             kube-controller-manager     1                   e287ad4b2e531       kube-controller-manager-embed-certs-516717
	64693c2aa9a7a       c3994bc696102       18 minutes ago       Running             kube-apiserver              1                   886273bfdf7ad       kube-apiserver-embed-certs-516717
	c24ee29935db3       56cc512116c8f       20 minutes ago       Exited              busybox                     0                   7222d29395163       busybox
	3e9260910496e       52546a367cc9e       20 minutes ago       Exited              coredns                     0                   feeba6515ec1b       coredns-66bc5c9577-rmhlp
	f6d977cc58b31       fc25172553d79       20 minutes ago       Exited              kube-proxy                  0                   14fcc1ab00813       kube-proxy-qlfhm
	8a6eeb04ec582       7dd6aaa1717ab       21 minutes ago       Exited              kube-scheduler              0                   f3ed40a9ebdb6       kube-scheduler-embed-certs-516717
	5324240631f01       5f1f5298c888d       21 minutes ago       Exited              etcd                        0                   50cd8b9208e1e       etcd-embed-certs-516717
	a16aad8a0a4ea       c80c8dbafe7dd       21 minutes ago       Exited              kube-controller-manager     0                   d95879f894097       kube-controller-manager-embed-certs-516717
	d35b2999e2920       c3994bc696102       21 minutes ago       Exited              kube-apiserver              0                   796cb1205a263       kube-apiserver-embed-certs-516717
	
	
	==> containerd <==
	Oct 13 15:43:21 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:21.472678150Z" level=info msg="StartContainer for \"abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6\""
	Oct 13 15:43:21 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:21.556991066Z" level=info msg="StartContainer for \"abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6\" returns successfully"
	Oct 13 15:43:21 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:21.611651732Z" level=info msg="shim disconnected" id=abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6 namespace=k8s.io
	Oct 13 15:43:21 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:21.611704496Z" level=warning msg="cleaning up after shim disconnected" id=abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6 namespace=k8s.io
	Oct 13 15:43:21 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:21.611718038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:43:22 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:22.053478473Z" level=info msg="RemoveContainer for \"1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc\""
	Oct 13 15:43:22 embed-certs-516717 containerd[721]: time="2025-10-13T15:43:22.061249589Z" level=info msg="RemoveContainer for \"1e2ca24113eacbf90e9fca7ecb4eeb33314782a43627e35c1ab466aa3a6576fc\" returns successfully"
	Oct 13 15:48:04 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:04.430424280Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:48:04 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:04.436698455Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:48:04 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:04.536502637Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:48:04 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:04.642608373Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:48:04 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:04.642690279Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 13 15:48:12 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:12.428139801Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:48:12 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:12.433116263Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:48:12 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:12.435522590Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:48:12 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:12.435652903Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.434239077Z" level=info msg="CreateContainer within sandbox \"80d4daeefd3f6b9e1b7f1b7176dc8514a204217b4515f7d21d5c10d6db327475\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.465793523Z" level=info msg="CreateContainer within sandbox \"80d4daeefd3f6b9e1b7f1b7176dc8514a204217b4515f7d21d5c10d6db327475\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f\""
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.467626247Z" level=info msg="StartContainer for \"a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f\""
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.549897101Z" level=info msg="StartContainer for \"a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f\" returns successfully"
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.602401350Z" level=info msg="shim disconnected" id=a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f namespace=k8s.io
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.602641834Z" level=warning msg="cleaning up after shim disconnected" id=a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f namespace=k8s.io
	Oct 13 15:48:30 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:30.602740546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:48:31 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:31.101744144Z" level=info msg="RemoveContainer for \"abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6\""
	Oct 13 15:48:31 embed-certs-516717 containerd[721]: time="2025-10-13T15:48:31.108759751Z" level=info msg="RemoveContainer for \"abee20a0c747bd8623d97a56789ea590c2ea981580fd662a424c8f92e47158b6\" returns successfully"
	
	
	==> coredns [3e9260910496e46f9f0c111e0059c1b373d41c5cdde09da39ee51382040eaf23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53173 - 26862 "HINFO IN 1089811145681660908.3981688596191647616. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041911236s
	
	
	==> coredns [5c2c9b6372899c44edae22b6cbdc9827e04d6faf9308b6eb5c4004430a47509b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58225 - 828 "HINFO IN 1646723403474265242.2092770015904884699. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442781572s
	
	
	==> describe nodes <==
	Name:               embed-certs-516717
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-516717
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=embed-certs-516717
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_29_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:29:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-516717
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:48:44 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:48:44 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:48:44 +0000   Mon, 13 Oct 2025 15:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:48:44 +0000   Mon, 13 Oct 2025 15:32:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    embed-certs-516717
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c44e321cdeb4ff5be4320e6af8af446
	  System UUID:                9c44e321-cdeb-4ff5-be43-20e6af8af446
	  Boot ID:                    b3404ab9-a97a-4475-a450-eca21836404e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-rmhlp                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-embed-certs-516717                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-516717             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-516717    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qlfhm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-516717             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-qp476               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6v4dm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v4zfv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node embed-certs-516717 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-516717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node embed-certs-516717 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeReady                21m                kubelet          Node embed-certs-516717 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node embed-certs-516717 event: Registered Node embed-certs-516717 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-516717 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-516717 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-516717 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node embed-certs-516717 has been rebooted, boot id: b3404ab9-a97a-4475-a450-eca21836404e
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-516717 event: Registered Node embed-certs-516717 in Controller
	
	
	==> dmesg <==
	[Oct13 15:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000066] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002500] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.722647] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.113075] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.495364] kauditd_printk_skb: 184 callbacks suppressed
	[Oct13 15:32] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.756221] kauditd_printk_skb: 161 callbacks suppressed
	[  +1.564700] kauditd_printk_skb: 203 callbacks suppressed
	[  +2.734520] kauditd_printk_skb: 47 callbacks suppressed
	[ +13.637386] kauditd_printk_skb: 6 callbacks suppressed
	[ +16.008528] kauditd_printk_skb: 7 callbacks suppressed
	[Oct13 15:33] kauditd_printk_skb: 5 callbacks suppressed
	[ +46.995379] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:35] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:38] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:43] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:48] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [30195cdffd02082b7047e0c85252c7e56a0060292e9ebf661b6cd944d9330f5d] <==
	{"level":"warn","ts":"2025-10-13T15:32:02.267324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.276860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.310686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.332433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.343764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:32:02.410674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:40:59.764289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.701182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:40:59.765368Z","caller":"traceutil/trace.go:172","msg":"trace[1376892760] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1253; }","duration":"205.879941ms","start":"2025-10-13T15:40:59.559437Z","end":"2025-10-13T15:40:59.765317Z","steps":["trace[1376892760] 'range keys from in-memory index tree'  (duration: 204.648562ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:41:05.629395Z","caller":"traceutil/trace.go:172","msg":"trace[646268734] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"155.227782ms","start":"2025-10-13T15:41:05.474137Z","end":"2025-10-13T15:41:05.629365Z","steps":["trace[646268734] 'process raft request'  (duration: 155.075037ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:42:01.288809Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1046}
	{"level":"info","ts":"2025-10-13T15:42:01.317779Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1046,"took":"27.882798ms","hash":125718119,"current-db-size-bytes":3170304,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1294336,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-13T15:42:01.317845Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":125718119,"revision":1046,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T15:43:10.687613Z","caller":"traceutil/trace.go:172","msg":"trace[1177254777] linearizableReadLoop","detail":"{readStateIndex:1543; appliedIndex:1543; }","duration":"184.696703ms","start":"2025-10-13T15:43:10.502813Z","end":"2025-10-13T15:43:10.687509Z","steps":["trace[1177254777] 'read index received'  (duration: 184.689146ms)","trace[1177254777] 'applied index is now lower than readState.Index'  (duration: 6.34µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:43:10.688133Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.187036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:43:10.688183Z","caller":"traceutil/trace.go:172","msg":"trace[2049623275] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1364; }","duration":"185.365419ms","start":"2025-10-13T15:43:10.502807Z","end":"2025-10-13T15:43:10.688172Z","steps":["trace[2049623275] 'agreement among raft nodes before linearized reading'  (duration: 185.145512ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:43:10.688987Z","caller":"traceutil/trace.go:172","msg":"trace[970645627] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"242.935623ms","start":"2025-10-13T15:43:10.445913Z","end":"2025-10-13T15:43:10.688849Z","steps":["trace[970645627] 'process raft request'  (duration: 242.597019ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:43:10.690044Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.65924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2025-10-13T15:43:10.690767Z","caller":"traceutil/trace.go:172","msg":"trace[565598913] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1365; }","duration":"183.424214ms","start":"2025-10-13T15:43:10.507326Z","end":"2025-10-13T15:43:10.690750Z","steps":["trace[565598913] 'agreement among raft nodes before linearized reading'  (duration: 181.314966ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:43:10.690726Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.853899ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T15:43:10.691965Z","caller":"traceutil/trace.go:172","msg":"trace[1472272108] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1365; }","duration":"133.092393ms","start":"2025-10-13T15:43:10.558864Z","end":"2025-10-13T15:43:10.691956Z","steps":["trace[1472272108] 'agreement among raft nodes before linearized reading'  (duration: 131.845873ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:43:11.160966Z","caller":"traceutil/trace.go:172","msg":"trace[1790049430] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"459.139784ms","start":"2025-10-13T15:43:10.701811Z","end":"2025-10-13T15:43:11.160951Z","steps":["trace[1790049430] 'process raft request'  (duration: 453.013206ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T15:43:11.163088Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-13T15:43:10.701789Z","time spent":"459.302061ms","remote":"127.0.0.1:51414","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1364 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-10-13T15:47:01.296237Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1302}
	{"level":"info","ts":"2025-10-13T15:47:01.302408Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1302,"took":"4.954875ms","hash":382428784,"current-db-size-bytes":3170304,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-10-13T15:47:01.302786Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":382428784,"revision":1302,"compact-revision":1046}
	
	
	==> etcd [5324240631f0124ec67ecac97c2f41c9450cd94c9b5cf7b963229f7309505980] <==
	{"level":"warn","ts":"2025-10-13T15:29:14.211822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.223256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.241896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.250471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.268566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.276180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.297987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.317165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.323784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.336140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.351093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.364462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.375651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.389820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.408129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.425708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.440898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.453833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.474213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.485611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.501971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.515719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.525842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.535503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:29:14.631437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35926","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:50:25 up 18 min,  0 users,  load average: 0.07, 0.10, 0.09
	Linux embed-certs-516717 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [64693c2aa9a7a7e7ce82c85685b50d56b40f62d945052f36e56c2bf1a75e2340] <==
	E1013 15:47:04.387487       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:47:04.387499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1013 15:47:04.387530       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:47:04.388756       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:48:04.387619       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:48:04.388178       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:48:04.388231       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:48:04.389588       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:48:04.389624       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:48:04.389648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:50:04.389374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:50:04.389680       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:50:04.389718       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:50:04.389787       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:50:04.389811       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:50:04.391506       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d35b2999e2920b182c31a06864472634271623f1ed67c5ee3fada7fc56276d8f] <==
	I1013 15:29:18.641266       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:29:18.691890       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 15:29:24.378999       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:24.404907       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:29:24.482511       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 15:29:24.554954       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1013 15:30:09.729988       1 conn.go:339] Error on socket receive: read tcp 192.168.72.104:8443->192.168.72.1:36100: use of closed network connection
	I1013 15:30:10.497167       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:30:10.511257       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.511514       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 15:30:10.511751       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:30:10.701606       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.100.175.174"}
	W1013 15:30:10.713902       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.714602       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1013 15:30:10.737041       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:30:10.737122       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [253c31b6993f10c24713a2cdee3e3f43eab29fa6059b115ba92dcf14fd7bbf21] <==
	I1013 15:44:09.676813       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:44:39.519933       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:44:39.688588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:45:09.526091       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:45:09.698306       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:45:39.532800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:45:39.709862       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:46:09.541317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:46:09.722427       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:46:39.548344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:46:39.733457       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:09.555809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:09.743214       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:39.563464       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:39.752768       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:09.570271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:09.762585       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:39.577081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:39.772897       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:09.582936       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:09.784484       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:39.590563       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:39.793846       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:50:09.596325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:50:09.802539       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [a16aad8a0a4ea4024ab693deeee7eb7f373d8299630cbe16ddfcb4eacba83924] <==
	I1013 15:29:23.445544       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 15:29:23.445770       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 15:29:23.445915       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 15:29:23.422795       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 15:29:23.422802       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 15:29:23.452347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 15:29:23.461991       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 15:29:23.466681       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 15:29:23.467039       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 15:29:23.467049       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 15:29:23.468007       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 15:29:23.468205       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 15:29:23.469918       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 15:29:23.470453       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 15:29:23.472410       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 15:29:23.477358       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 15:29:23.477372       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 15:29:23.484258       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 15:29:23.484357       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 15:29:23.484731       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-516717" podCIDRs=["10.244.0.0/24"]
	I1013 15:29:23.487549       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 15:29:23.517387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 15:29:23.517453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 15:29:23.517460       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 15:29:23.563235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [034ea310c76d53f3bcc7338d487d2d4f20c163467ba205608f981b10996fa6dd] <==
	I1013 15:32:05.760004       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:32:05.860557       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:32:05.860817       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.104"]
	E1013 15:32:05.861940       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:32:05.927834       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:32:05.928056       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:32:05.928487       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:32:05.940462       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:32:05.942190       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:32:05.942235       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:32:05.949559       1 config.go:200] "Starting service config controller"
	I1013 15:32:05.949662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:32:05.950163       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:32:05.950172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:32:05.950280       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:32:05.950294       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:32:05.958499       1 config.go:309] "Starting node config controller"
	I1013 15:32:05.958533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:32:05.958542       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:32:06.050758       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:32:06.050815       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:32:06.053929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f6d977cc58b317b9be2991e680b77068e09df90adedd531606b0a01dc5e2a409] <==
	I1013 15:29:26.189119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:29:26.296992       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:29:26.297045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.104"]
	E1013 15:29:26.297836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:29:26.471542       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:29:26.472442       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:29:26.472596       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:29:26.488717       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:29:26.489962       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:29:26.489994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:29:26.503053       1 config.go:200] "Starting service config controller"
	I1013 15:29:26.503097       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:29:26.503127       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:29:26.503133       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:29:26.503150       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:29:26.503156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:29:26.504548       1 config.go:309] "Starting node config controller"
	I1013 15:29:26.504575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:29:26.504582       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:29:26.603847       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:29:26.604190       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:29:26.604411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19ae15867a847a8163ed2c6d37159c5b71da4795a2b238627c44ea94ae551555] <==
	I1013 15:32:01.995333       1 serving.go:386] Generated self-signed cert in-memory
	W1013 15:32:03.347300       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:32:03.347400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:32:03.347415       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:32:03.348101       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:32:03.462815       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 15:32:03.467695       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:32:03.473134       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:32:03.473619       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:32:03.478883       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 15:32:03.479268       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 15:32:03.574770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [8a6eeb04ec5821eeaf74ba0e78207ad2cd27bf89df2419de7f4e31e12a209a77] <==
	E1013 15:29:15.556970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:29:15.557371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:29:15.559353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:29:15.559951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:29:15.559262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:29:15.560246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:29:15.560346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:29:15.560401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:29:16.387743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:29:16.402126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:29:16.457418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:29:16.457418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:29:16.519611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:29:16.552818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:29:16.572076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 15:29:16.728623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:29:16.731040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:29:16.742824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:29:16.803800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 15:29:16.839331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 15:29:16.972398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 15:29:17.010393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:29:17.021635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 15:29:17.035244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1013 15:29:19.113174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 15:49:11 embed-certs-516717 kubelet[1040]: E1013 15:49:11.425342    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:49:14 embed-certs-516717 kubelet[1040]: E1013 15:49:14.427840    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:49:16 embed-certs-516717 kubelet[1040]: E1013 15:49:16.428626    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:49:22 embed-certs-516717 kubelet[1040]: I1013 15:49:22.424618    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:49:22 embed-certs-516717 kubelet[1040]: E1013 15:49:22.426364    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:49:25 embed-certs-516717 kubelet[1040]: E1013 15:49:25.426620    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:49:31 embed-certs-516717 kubelet[1040]: E1013 15:49:31.426601    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:49:35 embed-certs-516717 kubelet[1040]: I1013 15:49:35.424993    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:49:35 embed-certs-516717 kubelet[1040]: E1013 15:49:35.425545    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:49:37 embed-certs-516717 kubelet[1040]: E1013 15:49:37.425474    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:49:42 embed-certs-516717 kubelet[1040]: E1013 15:49:42.426071    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:49:46 embed-certs-516717 kubelet[1040]: I1013 15:49:46.424241    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:49:46 embed-certs-516717 kubelet[1040]: E1013 15:49:46.424620    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:49:51 embed-certs-516717 kubelet[1040]: E1013 15:49:51.427434    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:49:56 embed-certs-516717 kubelet[1040]: E1013 15:49:56.429388    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:50:00 embed-certs-516717 kubelet[1040]: I1013 15:50:00.425522    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:50:00 embed-certs-516717 kubelet[1040]: E1013 15:50:00.425769    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:50:04 embed-certs-516717 kubelet[1040]: E1013 15:50:04.425630    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:50:07 embed-certs-516717 kubelet[1040]: E1013 15:50:07.426240    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:50:12 embed-certs-516717 kubelet[1040]: I1013 15:50:12.423976    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:50:12 embed-certs-516717 kubelet[1040]: E1013 15:50:12.424948    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	Oct 13 15:50:15 embed-certs-516717 kubelet[1040]: E1013 15:50:15.425896    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v4zfv" podUID="424f9607-da65-4bb7-be75-cf1ef1421095"
	Oct 13 15:50:18 embed-certs-516717 kubelet[1040]: E1013 15:50:18.429307    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-qp476" podUID="8c27330f-7d42-4f74-b27d-27701dfc01d2"
	Oct 13 15:50:25 embed-certs-516717 kubelet[1040]: I1013 15:50:25.424748    1040 scope.go:117] "RemoveContainer" containerID="a012e2ab8913fcf5483df204224e9645411aefd7cf9f974c92e6e5a7bc081a1f"
	Oct 13 15:50:25 embed-certs-516717 kubelet[1040]: E1013 15:50:25.424973    1040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6v4dm_kubernetes-dashboard(f2e74f08-a9d6-4657-b401-70f4306d77e2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6v4dm" podUID="f2e74f08-a9d6-4657-b401-70f4306d77e2"
	
	
	==> storage-provisioner [a1eeedac0325f3ca4472865170525536db210d669cc7996f65820d724d30f4c2] <==
	W1013 15:50:01.784533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:03.788958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:03.800620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:05.805601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:05.812114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:07.816518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:07.825967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:09.831399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:09.838467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:11.842800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:11.848662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:13.853102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:13.865377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:15.870681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:15.880393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:17.885849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:17.892971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:19.897691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:19.903986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:21.908554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:21.920395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:23.924704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:23.932499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:25.937519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:50:25.950171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e93a05bb96f31fdbf4186d41077f4f8e665dbf0ddaa6b77822ff6d870340c78b] <==
	I1013 15:32:05.319753       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:32:35.338617       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-516717 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv: exit status 1 (65.838056ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-qp476" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-v4zfv" not found

** /stderr **
helpers_test.go:287: kubectl --context embed-certs-516717 describe pod metrics-server-746fcd58dc-qp476 kubernetes-dashboard-855c9754f9-v4zfv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z6wz8" [c1d2745a-8b1e-4dd7-878e-d4822a3f956d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 15:44:08.365318 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.371827 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.383318 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.405111 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.446667 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.528270 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:08.690149 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:09.012618 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:09.654116 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:10.936108 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:13.497978 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:44:14.882544 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-13 15:53:06.23705005 +0000 UTC m=+7077.177608438
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 describe po kubernetes-dashboard-855c9754f9-z6wz8 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-426789 describe po kubernetes-dashboard-855c9754f9-z6wz8 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-z6wz8
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-426789/192.168.50.176
Start Time:       Mon, 13 Oct 2025 15:43:57 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4zqfg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-4zqfg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m9s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8 to default-k8s-diff-port-426789
Warning  Failed     7m34s (x3 over 9m3s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    6m13s (x5 over 9m9s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m13s (x5 over 9m3s)   kubelet            Error: ErrImagePull
Warning  Failed     6m13s (x2 over 8m50s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to pull and unpack image "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m59s (x20 over 9m3s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m44s (x21 over 9m3s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 logs kubernetes-dashboard-855c9754f9-z6wz8 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-426789 logs kubernetes-dashboard-855c9754f9-z6wz8 -n kubernetes-dashboard: exit status 1 (96.57907ms)

** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-z6wz8" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-426789 logs kubernetes-dashboard-855c9754f9-z6wz8 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-426789 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-426789 logs -n 25: (1.676064946s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                     ARGS                                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-316150 --alsologtostderr -v=1                                                                                                                                                                                                                              │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ delete  │ -p old-k8s-version-316150                                                                                                                                                                                                                                                     │ old-k8s-version-316150       │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:42 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:42 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-426789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                       │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-426789 │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                                       │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ stop    │ -p newest-cni-400509 --alsologtostderr -v=3                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-400509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                                                  │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:43 UTC │
	│ start   │ -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1 │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:43 UTC │ 13 Oct 25 15:44 UTC │
	│ image   │ newest-cni-400509 image list --format=json                                                                                                                                                                                                                                    │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ pause   │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ unpause │ -p newest-cni-400509 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ delete  │ -p newest-cni-400509                                                                                                                                                                                                                                                          │ newest-cni-400509            │ jenkins │ v1.37.0 │ 13 Oct 25 15:44 UTC │ 13 Oct 25 15:44 UTC │
	│ image   │ no-preload-673307 image list --format=json                                                                                                                                                                                                                                    │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ pause   │ -p no-preload-673307 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ unpause │ -p no-preload-673307 --alsologtostderr -v=1                                                                                                                                                                                                                                   │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ delete  │ -p no-preload-673307                                                                                                                                                                                                                                                          │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ delete  │ -p no-preload-673307                                                                                                                                                                                                                                                          │ no-preload-673307            │ jenkins │ v1.37.0 │ 13 Oct 25 15:49 UTC │ 13 Oct 25 15:49 UTC │
	│ image   │ embed-certs-516717 image list --format=json                                                                                                                                                                                                                                   │ embed-certs-516717           │ jenkins │ v1.37.0 │ 13 Oct 25 15:50 UTC │ 13 Oct 25 15:50 UTC │
	│ pause   │ -p embed-certs-516717 --alsologtostderr -v=1                                                                                                                                                                                                                                  │ embed-certs-516717           │ jenkins │ v1.37.0 │ 13 Oct 25 15:50 UTC │ 13 Oct 25 15:50 UTC │
	│ unpause │ -p embed-certs-516717 --alsologtostderr -v=1                                                                                                                                                                                                                                  │ embed-certs-516717           │ jenkins │ v1.37.0 │ 13 Oct 25 15:50 UTC │ 13 Oct 25 15:50 UTC │
	│ delete  │ -p embed-certs-516717                                                                                                                                                                                                                                                         │ embed-certs-516717           │ jenkins │ v1.37.0 │ 13 Oct 25 15:50 UTC │ 13 Oct 25 15:50 UTC │
	│ delete  │ -p embed-certs-516717                                                                                                                                                                                                                                                         │ embed-certs-516717           │ jenkins │ v1.37.0 │ 13 Oct 25 15:50 UTC │ 13 Oct 25 15:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 15:43:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 15:43:36.713594 1881569 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:43:36.713867 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.713876 1881569 out.go:374] Setting ErrFile to fd 2...
	I1013 15:43:36.713881 1881569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:43:36.714128 1881569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:43:36.714601 1881569 out.go:368] Setting JSON to false
	I1013 15:43:36.715659 1881569 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":26765,"bootTime":1760343452,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:43:36.715764 1881569 start.go:141] virtualization: kvm guest
	I1013 15:43:36.717879 1881569 out.go:179] * [newest-cni-400509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:43:36.719306 1881569 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:43:36.719352 1881569 notify.go:220] Checking for updates...
	I1013 15:43:36.722297 1881569 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:43:36.723784 1881569 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:36.728380 1881569 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:43:36.729831 1881569 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:43:36.731178 1881569 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:43:36.733044 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:36.733466 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.733553 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.748649 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1013 15:43:36.749362 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.749950 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.749983 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.750498 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.750765 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.751059 1881569 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:43:36.751384 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:36.751424 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:36.766235 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I1013 15:43:36.766738 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:36.767297 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:36.767322 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:36.767684 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:36.767908 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:36.805154 1881569 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 15:43:36.806336 1881569 start.go:305] selected driver: kvm2
	I1013 15:43:36.806354 1881569 start.go:925] validating driver "kvm2" against &{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.806467 1881569 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:43:36.807212 1881569 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.807326 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.823011 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.823050 1881569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 15:43:36.837875 1881569 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 15:43:36.838417 1881569 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:43:36.838458 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:36.838518 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:36.838573 1881569 start.go:349] cluster config:
	{Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:36.838736 1881569 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 15:43:36.841828 1881569 out.go:179] * Starting "newest-cni-400509" primary control-plane node in "newest-cni-400509" cluster
	I1013 15:43:35.461409 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:35.461442 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:35.461456 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | command : exit 0
	I1013 15:43:35.461470 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | err     : exit status 255
	I1013 15:43:35.461482 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | output  : 
	I1013 15:43:38.463606 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Getting to WaitForSSH function...
	I1013 15:43:38.467055 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.467571 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.467755 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH client type: external
	I1013 15:43:38.467781 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa (-rw-------)
	I1013 15:43:38.467825 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:38.467840 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | About to run SSH command:
	I1013 15:43:38.467903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | exit 0
	I1013 15:43:36.843198 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:36.843293 1881569 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1013 15:43:36.843334 1881569 cache.go:58] Caching tarball of preloaded images
	I1013 15:43:36.843490 1881569 preload.go:233] Found /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1013 15:43:36.843509 1881569 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1013 15:43:36.843683 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:36.843944 1881569 start.go:360] acquireMachinesLock for newest-cni-400509: {Name:mk84c008353cc80ba3c6cf364c26cb6563e060bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 15:43:39.632101 1881569 start.go:364] duration metric: took 2.788099128s to acquireMachinesLock for "newest-cni-400509"
	I1013 15:43:39.632152 1881569 start.go:96] Skipping create...Using existing machine configuration
	I1013 15:43:39.632159 1881569 fix.go:54] fixHost starting: 
	I1013 15:43:39.632598 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:39.632657 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:39.649454 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I1013 15:43:39.650005 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:39.650546 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:43:39.650575 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:39.651029 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:39.651238 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:39.651401 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:43:39.654204 1881569 fix.go:112] recreateIfNeeded on newest-cni-400509: state=Stopped err=<nil>
	I1013 15:43:39.654249 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	W1013 15:43:39.654457 1881569 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 15:43:39.656851 1881569 out.go:252] * Restarting existing kvm2 VM for "newest-cni-400509" ...
	I1013 15:43:39.656907 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Start
	I1013 15:43:39.657076 1881569 main.go:141] libmachine: (newest-cni-400509) starting domain...
	I1013 15:43:39.657101 1881569 main.go:141] libmachine: (newest-cni-400509) ensuring networks are active...
	I1013 15:43:39.657900 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network default is active
	I1013 15:43:39.658431 1881569 main.go:141] libmachine: (newest-cni-400509) Ensuring network mk-newest-cni-400509 is active
	I1013 15:43:39.658999 1881569 main.go:141] libmachine: (newest-cni-400509) getting domain XML...
	I1013 15:43:39.660153 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | starting domain XML:
	I1013 15:43:39.660177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | <domain type='kvm'>
	I1013 15:43:39.660215 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <name>newest-cni-400509</name>
	I1013 15:43:39.660260 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <uuid>27888586-a2e0-44db-a3c9-b78f39af9148</uuid>
	I1013 15:43:39.660278 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 15:43:39.660290 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 15:43:39.660307 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 15:43:39.660324 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <os>
	I1013 15:43:39.660338 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 15:43:39.660350 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='cdrom'/>
	I1013 15:43:39.660363 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <boot dev='hd'/>
	I1013 15:43:39.660374 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <bootmenu enable='no'/>
	I1013 15:43:39.660381 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </os>
	I1013 15:43:39.660390 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <features>
	I1013 15:43:39.660431 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <acpi/>
	I1013 15:43:39.660458 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <apic/>
	I1013 15:43:39.660475 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <pae/>
	I1013 15:43:39.660482 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </features>
	I1013 15:43:39.660495 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 15:43:39.660517 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <clock offset='utc'/>
	I1013 15:43:39.660527 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 15:43:39.660535 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_reboot>restart</on_reboot>
	I1013 15:43:39.660544 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <on_crash>destroy</on_crash>
	I1013 15:43:39.660554 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   <devices>
	I1013 15:43:39.660565 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 15:43:39.660576 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='cdrom'>
	I1013 15:43:39.660585 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw'/>
	I1013 15:43:39.660601 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/boot2docker.iso'/>
	I1013 15:43:39.660614 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 15:43:39.660624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <readonly/>
	I1013 15:43:39.660636 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 15:43:39.660645 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660655 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <disk type='file' device='disk'>
	I1013 15:43:39.660666 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 15:43:39.660683 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source file='/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/newest-cni-400509.rawdisk'/>
	I1013 15:43:39.660701 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target dev='hda' bus='virtio'/>
	I1013 15:43:39.660725 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 15:43:39.660734 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </disk>
	I1013 15:43:39.660746 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 15:43:39.660766 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 15:43:39.660777 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660795 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 15:43:39.660809 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 15:43:39.660833 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 15:43:39.660845 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </controller>
	I1013 15:43:39.660852 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660865 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:a8:3a:80'/>
	I1013 15:43:39.660880 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='mk-newest-cni-400509'/>
	I1013 15:43:39.660909 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.660934 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 15:43:39.660966 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.660982 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <interface type='network'>
	I1013 15:43:39.660998 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <mac address='52:54:00:ee:bd:4a'/>
	I1013 15:43:39.661014 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <source network='default'/>
	I1013 15:43:39.661026 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <model type='virtio'/>
	I1013 15:43:39.661044 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 15:43:39.661064 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </interface>
	I1013 15:43:39.661072 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <serial type='pty'>
	I1013 15:43:39.661080 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='isa-serial' port='0'>
	I1013 15:43:39.661093 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |         <model name='isa-serial'/>
	I1013 15:43:39.661105 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       </target>
	I1013 15:43:39.661112 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </serial>
	I1013 15:43:39.661125 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <console type='pty'>
	I1013 15:43:39.661132 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <target type='serial' port='0'/>
	I1013 15:43:39.661139 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </console>
	I1013 15:43:39.661146 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='mouse' bus='ps2'/>
	I1013 15:43:39.661173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 15:43:39.661192 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <audio id='1' type='none'/>
	I1013 15:43:39.661213 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <memballoon model='virtio'>
	I1013 15:43:39.661263 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 15:43:39.661276 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </memballoon>
	I1013 15:43:39.661285 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     <rng model='virtio'>
	I1013 15:43:39.661305 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <backend model='random'>/dev/random</backend>
	I1013 15:43:39.661325 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 15:43:39.661337 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |     </rng>
	I1013 15:43:39.661348 1881569 main.go:141] libmachine: (newest-cni-400509) DBG |   </devices>
	I1013 15:43:39.661357 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | </domain>
	I1013 15:43:39.661367 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | 
	I1013 15:43:40.126826 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for domain to start...
	I1013 15:43:40.128784 1881569 main.go:141] libmachine: (newest-cni-400509) domain is now running
	I1013 15:43:40.128813 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for IP...
	I1013 15:43:40.129922 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.130919 1881569 main.go:141] libmachine: (newest-cni-400509) found domain IP: 192.168.39.58
	I1013 15:43:40.130941 1881569 main.go:141] libmachine: (newest-cni-400509) reserving static IP address...
	I1013 15:43:40.130955 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has current primary IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.131624 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.131659 1881569 main.go:141] libmachine: (newest-cni-400509) reserved static IP address 192.168.39.58 for domain newest-cni-400509
	I1013 15:43:40.131687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | skip adding static IP to network mk-newest-cni-400509 - found existing host DHCP lease matching {name: "newest-cni-400509", mac: "52:54:00:a8:3a:80", ip: "192.168.39.58"}
	I1013 15:43:40.131707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:40.131747 1881569 main.go:141] libmachine: (newest-cni-400509) waiting for SSH...
	I1013 15:43:40.134418 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.134976 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:42:58 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:40.135005 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:40.135191 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:40.135247 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:40.135291 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:40.135327 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:40.135339 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:38.610349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:38.610819 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetConfigRaw
	I1013 15:43:38.611609 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:38.614998 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615542 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.615574 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.615849 1881287 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/config.json ...
	I1013 15:43:38.616089 1881287 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:38.616107 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:38.616354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.619808 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620495 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.620528 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.620763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.620947 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621205 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.621440 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.621677 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.621969 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.621982 1881287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:38.741296 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:38.741340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741648 1881287 buildroot.go:166] provisioning hostname "default-k8s-diff-port-426789"
	I1013 15:43:38.741682 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:38.741931 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.745516 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746082 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.746124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.746340 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.746557 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746778 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.746938 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.747114 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.747384 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.747401 1881287 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-426789 && echo "default-k8s-diff-port-426789" | sudo tee /etc/hostname
	I1013 15:43:38.883536 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-426789
	
	I1013 15:43:38.883566 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:38.886934 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887401 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:38.887445 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:38.887640 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:38.887893 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888084 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:38.888211 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:38.888374 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:38.888582 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:38.888599 1881287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-426789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-426789/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-426789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:39.017088 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:39.017119 1881287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:39.017144 1881287 buildroot.go:174] setting up certificates
	I1013 15:43:39.017158 1881287 provision.go:84] configureAuth start
	I1013 15:43:39.017194 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetMachineName
	I1013 15:43:39.017591 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.020991 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021443 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.021466 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.021667 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.024308 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.024740 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.024775 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.025056 1881287 provision.go:143] copyHostCerts
	I1013 15:43:39.025124 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:39.025142 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:39.025243 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:39.025421 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:39.025436 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:39.025483 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:39.025608 1881287 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:39.025622 1881287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:39.025662 1881287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:39.025772 1881287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-426789 san=[127.0.0.1 192.168.50.176 default-k8s-diff-port-426789 localhost minikube]
	I1013 15:43:39.142099 1881287 provision.go:177] copyRemoteCerts
	I1013 15:43:39.142168 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:39.142198 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.146110 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146639 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.146665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.146950 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.147180 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.147364 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.147518 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.238167 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:39.273616 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 15:43:39.314055 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 15:43:39.358579 1881287 provision.go:87] duration metric: took 341.404418ms to configureAuth
	I1013 15:43:39.358616 1881287 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:39.358839 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:39.358854 1881287 machine.go:96] duration metric: took 742.756264ms to provisionDockerMachine
	I1013 15:43:39.358864 1881287 start.go:293] postStartSetup for "default-k8s-diff-port-426789" (driver="kvm2")
	I1013 15:43:39.358874 1881287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:39.358903 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.359307 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:39.359349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.362558 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.362951 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.362982 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.363306 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.363546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.363773 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.363949 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.454925 1881287 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:39.460515 1881287 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:39.460550 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:39.460650 1881287 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:39.460784 1881287 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:39.460899 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:39.474542 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:39.506976 1881287 start.go:296] duration metric: took 148.091906ms for postStartSetup
	I1013 15:43:39.507038 1881287 fix.go:56] duration metric: took 15.862602997s for fixHost
	I1013 15:43:39.507067 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.510376 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.510803 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.510837 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.511112 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.511361 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511540 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.511666 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.511848 1881287 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:39.512046 1881287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.176 22 <nil> <nil>}
	I1013 15:43:39.512057 1881287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:39.631899 1881287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370219.586411289
	
	I1013 15:43:39.631925 1881287 fix.go:216] guest clock: 1760370219.586411289
	I1013 15:43:39.631933 1881287 fix.go:229] Guest: 2025-10-13 15:43:39.586411289 +0000 UTC Remote: 2025-10-13 15:43:39.507044166 +0000 UTC m=+16.050668033 (delta=79.367123ms)
	I1013 15:43:39.631970 1881287 fix.go:200] guest clock delta is within tolerance: 79.367123ms
	I1013 15:43:39.631976 1881287 start.go:83] releasing machines lock for "default-k8s-diff-port-426789", held for 15.987562481s
	I1013 15:43:39.632004 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.632313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:39.636049 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636504 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.636554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.636797 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637455 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637669 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:39.637818 1881287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:39.637878 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.637920 1881287 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:39.637952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:39.641477 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.641994 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642042 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:39.642070 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642087 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:39.642314 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642327 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:39.642551 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642554 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:39.642858 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.642902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.643095 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:39.734708 1881287 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:39.760037 1881287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:39.768523 1881287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:39.768671 1881287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:39.792919 1881287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:39.792950 1881287 start.go:495] detecting cgroup driver to use...
	I1013 15:43:39.793023 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:39.831232 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:39.850993 1881287 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:39.851102 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:39.873826 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:39.896556 1881287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:40.064028 1881287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:40.305591 1881287 docker.go:234] disabling docker service ...
	I1013 15:43:40.305667 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:40.324329 1881287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:40.340817 1881287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:40.541438 1881287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:40.704419 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:40.723755 1881287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:40.752026 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:40.767452 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:40.782881 1881287 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:40.782958 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:40.798473 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.813327 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:40.828869 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:40.843772 1881287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:40.859620 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:40.876007 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:40.891780 1881287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:43:40.907887 1881287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:40.919493 1881287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:40.919559 1881287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:40.950308 1881287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:40.968591 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:41.139186 1881287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:41.183301 1881287 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:41.183403 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:41.190223 1881287 retry.go:31] will retry after 1.16806029s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:43:42.358579 1881287 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:42.366926 1881287 start.go:563] Will wait 60s for crictl version
	I1013 15:43:42.367063 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:42.372655 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:42.429723 1881287 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:42.429814 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.471739 1881287 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:42.509604 1881287 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:42.511075 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetIP
	I1013 15:43:42.514790 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515349 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:42.515383 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:42.515708 1881287 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:42.520820 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:42.537702 1881287 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:42.537834 1881287 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:42.537882 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.577897 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.577934 1881287 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:42.578012 1881287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:42.626753 1881287 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:42.626790 1881287 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:42.626816 1881287 kubeadm.go:934] updating node { 192.168.50.176 8444 v1.34.1 containerd true true} ...
	I1013 15:43:42.626973 1881287 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-426789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:42.627112 1881287 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:42.670994 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:42.671035 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:42.671067 1881287 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 15:43:42.671108 1881287 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.176 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-426789 NodeName:default-k8s-diff-port-426789 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/mini
kube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:42.671296 1881287 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.176
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-426789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:42.671382 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:42.685850 1881287 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:42.685938 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:42.702293 1881287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1013 15:43:42.726402 1881287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:43:42.754908 1881287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1013 15:43:42.782246 1881287 ssh_runner.go:195] Run: grep 192.168.50.176	control-plane.minikube.internal$ /etc/hosts
	I1013 15:43:42.788445 1881287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
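The one-liner above is minikube's idempotent `/etc/hosts` rewrite: strip any stale `control-plane.minikube.internal` entry, append the current IP, and copy the result back. A hypothetical sketch of the same pattern against a temp file (paths and IPs are illustrative; the real command runs under `sudo` on the node):

```shell
# Stand-in hosts file with a stale control-plane entry.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.50.99\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Drop any line ending in "<tab>control-plane.minikube.internal",
# then append the current mapping; write atomically via a temp copy.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.168.50.176\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Because the stale entry is filtered before the append, re-running the snippet leaves exactly one `control-plane.minikube.internal` line.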
	I1013 15:43:42.806629 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:42.987595 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:43.027112 1881287 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789 for IP: 192.168.50.176
	I1013 15:43:43.027140 1881287 certs.go:195] generating shared ca certs ...
	I1013 15:43:43.027163 1881287 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.027383 1881287 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:43:43.027460 1881287 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:43:43.027483 1881287 certs.go:257] generating profile certs ...
	I1013 15:43:43.027635 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/client.key
	I1013 15:43:43.027760 1881287 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key.1e9a3db8
	I1013 15:43:43.027826 1881287 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key
	I1013 15:43:43.027999 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:43:43.028050 1881287 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:43:43.028066 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:43:43.028098 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:43:43.028131 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:43:43.028163 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:43:43.028239 1881287 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:43.029002 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:43:43.082431 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:43:43.140436 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:43:43.210359 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:43:43.257226 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 15:43:43.298663 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 15:43:43.332285 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:43:43.369205 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/default-k8s-diff-port-426789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:43:43.410586 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:43:43.451819 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:43:43.486367 1881287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:43:43.524801 1881287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:43:43.547937 1881287 ssh_runner.go:195] Run: openssl version
	I1013 15:43:43.555474 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:43:43.571070 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579175 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.579263 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:43:43.587603 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 15:43:43.604566 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:43:43.620309 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.626957 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.627045 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:43:43.635543 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:43:43.651153 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:43:43.666800 1881287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674478 1881287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.674540 1881287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:43:43.685525 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
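The `ln -fs ... /etc/ssl/certs/<hash>.0` commands above exist because OpenSSL locates trusted CAs by subject-hash-named files in the certs directory. A hypothetical sketch of the same linking step, using a throwaway self-signed cert and a temp directory in place of the real minikube paths:

```shell
# Create a scratch "certs dir" and a throwaway self-signed CA cert.
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$CERTDIR/demo.key" \
  -out "$CERTDIR/demo.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null

# Compute the subject hash (what `openssl x509 -hash -noout` prints in the
# log) and link the cert under "<hash>.0" so OpenSSL lookup can find it.
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
ln -fs "$CERTDIR/demo.pem" "$CERTDIR/$HASH.0"
ls -l "$CERTDIR/$HASH.0"
```

The `.0` suffix disambiguates multiple certificates that hash to the same value (`.1`, `.2`, and so on).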
	I1013 15:43:43.702224 1881287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:43:43.709862 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:43:43.720756 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:43:43.729444 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:43:43.737616 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:43:43.745934 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:43:43.754091 1881287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
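The run of `openssl x509 ... -checkend 86400` commands above is minikube's cert-expiry gate: `-checkend N` exits 0 only if the certificate is still valid N seconds from now (86400 = 24 hours), so a near-expiry control-plane cert forces regeneration. A minimal sketch with a fresh self-signed cert standing in for the cluster certs:

```shell
# Throwaway cert valid for 30 days.
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$TMP/demo.key" \
  -out "$TMP/demo.crt" -days 30 -subj "/CN=demo" 2>/dev/null

# Exit status 0 means "will still be valid 86400s from now".
if openssl x509 -noout -in "$TMP/demo.crt" -checkend 86400; then
  echo "cert valid for at least another 24h"
fi
```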
	I1013 15:43:43.762115 1881287 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-426789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.1 ClusterName:default-k8s-diff-port-426789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddre
ss: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:43:43.762208 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:43:43.762293 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.808267 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.808301 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.808306 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.808312 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.808316 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.808322 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.808327 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.808338 1881287 cri.go:89] found id: ""
	I1013 15:43:43.808404 1881287 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:43:43.831377 1881287 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:43:43Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:43:43.831483 1881287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:43:43.845227 1881287 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:43:43.845260 1881287 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:43:43.845327 1881287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:43:43.863194 1881287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:43:43.864292 1881287 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-426789" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:43.864923 1881287 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-426789" cluster setting kubeconfig missing "default-k8s-diff-port-426789" context setting]
	I1013 15:43:43.865728 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:43.867585 1881287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:43:43.883585 1881287 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.50.176
	I1013 15:43:43.883642 1881287 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:43:43.883662 1881287 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:43:43.883756 1881287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:43:43.948818 1881287 cri.go:89] found id: "7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8"
	I1013 15:43:43.948851 1881287 cri.go:89] found id: "23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3"
	I1013 15:43:43.948857 1881287 cri.go:89] found id: "5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c"
	I1013 15:43:43.948863 1881287 cri.go:89] found id: "72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2"
	I1013 15:43:43.948868 1881287 cri.go:89] found id: "f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996"
	I1013 15:43:43.948872 1881287 cri.go:89] found id: "d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929"
	I1013 15:43:43.948876 1881287 cri.go:89] found id: "ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547"
	I1013 15:43:43.948880 1881287 cri.go:89] found id: ""
	I1013 15:43:43.948890 1881287 cri.go:252] Stopping containers: [7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547]
	I1013 15:43:43.948976 1881287 ssh_runner.go:195] Run: which crictl
	I1013 15:43:43.955264 1881287 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7720a942500c9b821d94a5f2fc11f8b31a4bb4216ac0d666abc2fca30f5ed2e8 23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3 5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c 72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2 f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996 d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929 ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547
	I1013 15:43:44.001390 1881287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:43:44.022439 1881287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:43:44.035325 1881287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:43:44.035351 1881287 kubeadm.go:157] found existing configuration files:
	
	I1013 15:43:44.035411 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 15:43:44.047208 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:43:44.047292 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:43:44.060647 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 15:43:44.074202 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:43:44.074279 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:43:44.088532 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.103533 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:43:44.103601 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:43:44.122077 1881287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 15:43:44.134937 1881287 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:43:44.135018 1881287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:43:44.147842 1881287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:43:44.162447 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:44.318010 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:45.992643 1881287 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.674585761s)
	I1013 15:43:45.992768 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.260999 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.358031 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:46.484897 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:46.485026 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:46.986001 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.485965 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:47.985368 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:48.031141 1881287 api_server.go:72] duration metric: took 1.546261555s to wait for apiserver process to appear ...
	I1013 15:43:48.031174 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:48.031199 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.397143 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: exit status 255: 
	I1013 15:43:51.397186 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 15:43:51.397205 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | command : exit 0
	I1013 15:43:51.397214 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | err     : exit status 255
	I1013 15:43:51.397235 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | output  : 
	I1013 15:43:50.751338 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.751376 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:43:50.751412 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:50.842254 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:43:50.842294 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
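The 403 and 500 responses above are expected during startup: the apiserver answers before RBAC bootstrap roles and post-start hooks finish, so minikube's wait loop treats any non-200 as "not ready yet" and keeps probing. A hypothetical sketch of that loop, with a mock probe standing in for the real `https://192.168.50.176:8444/healthz` request:

```shell
# Mock probe: returns 500 on the first two calls, then 200, mimicking an
# apiserver whose post-start hooks finish on the third poll.
attempt=0
status=500
while [ "$status" != "200" ]; do
  attempt=$((attempt + 1))
  # Real code performs an HTTPS GET here and records the status code.
  if [ "$attempt" -lt 3 ]; then status=500; else status=200; fi
done
echo "apiserver healthy after $attempt probes"
```

The production loop additionally sleeps between probes and aborts on a deadline rather than spinning forever.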
	I1013 15:43:51.031709 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.038850 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.038888 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:51.531498 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:51.540163 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:51.540193 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:52.031686 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.042465 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:43:52.042504 1881287 api_server.go:103] status: https://192.168.50.176:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:43:52.531913 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:52.538420 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:52.550202 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:52.550246 1881287 api_server.go:131] duration metric: took 4.519061614s to wait for apiserver health ...
	I1013 15:43:52.550262 1881287 cni.go:84] Creating CNI manager for ""
	I1013 15:43:52.550273 1881287 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:52.552571 1881287 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:43:52.554067 1881287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:43:52.574739 1881287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:43:52.604706 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:52.613468 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:52.613525 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:52.613537 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:52.613547 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:52.613563 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:52.613576 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 15:43:52.613595 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:52.613609 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:52.613617 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 15:43:52.613628 1881287 system_pods.go:74] duration metric: took 8.879878ms to wait for pod list to return data ...
	I1013 15:43:52.613643 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:52.618132 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:52.618175 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:52.618192 1881287 node_conditions.go:105] duration metric: took 4.543501ms to run NodePressure ...
	I1013 15:43:52.618275 1881287 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:43:53.069625 1881287 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076322 1881287 kubeadm.go:743] kubelet initialised
	I1013 15:43:53.076353 1881287 kubeadm.go:744] duration metric: took 6.69335ms waiting for restarted kubelet to initialise ...
	I1013 15:43:53.076378 1881287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:43:53.108126 1881287 ops.go:34] apiserver oom_adj: -16
	I1013 15:43:53.108163 1881287 kubeadm.go:601] duration metric: took 9.262892964s to restartPrimaryControlPlane
	I1013 15:43:53.108181 1881287 kubeadm.go:402] duration metric: took 9.346075744s to StartCluster
	I1013 15:43:53.108210 1881287 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.108336 1881287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:43:53.110574 1881287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:43:53.111002 1881287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.176 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:43:53.111137 1881287 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:43:53.111274 1881287 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111277 1881287 config.go:182] Loaded profile config "default-k8s-diff-port-426789": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:53.111300 1881287 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111313 1881287 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:43:53.111324 1881287 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111339 1881287 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111346 1881287 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:53.111350 1881287 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.111359 1881287 addons.go:247] addon dashboard should already be in state true
	W1013 15:43:53.111360 1881287 addons.go:247] addon metrics-server should already be in state true
	I1013 15:43:53.111379 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111387 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111402 1881287 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-426789"
	I1013 15:43:53.111347 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.111445 1881287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-426789"
	I1013 15:43:53.111808 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111805 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111835 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.111848 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111868 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.111964 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.112184 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.112238 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.115926 1881287 out.go:179] * Verifying Kubernetes components...
	I1013 15:43:53.117837 1881287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:53.131021 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I1013 15:43:53.131145 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I1013 15:43:53.131263 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I1013 15:43:53.131306 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1013 15:43:53.131780 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.131963 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132182 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132306 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132328 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132489 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132502 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132656 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.132786 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.132818 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.132923 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.132945 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133266 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.133335 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.133352 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.133493 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.133868 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.133922 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134084 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.134115 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.134175 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.135005 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.135097 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.138473 1881287 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-426789"
	W1013 15:43:53.138535 1881287 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:43:53.138571 1881287 host.go:66] Checking if "default-k8s-diff-port-426789" exists ...
	I1013 15:43:53.138951 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.138996 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.153375 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1013 15:43:53.154086 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1013 15:43:53.154354 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.154973 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.155287 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155384 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155522 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.155588 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.155980 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156055 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.156311 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.156695 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.159943 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.160580 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.161397 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I1013 15:43:53.161596 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1013 15:43:53.162371 1881287 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:43:53.162442 1881287 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:43:53.162491 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.162623 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.163108 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163158 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163241 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.163269 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.163621 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163868 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.163948 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.164392 1881287 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.164414 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:43:53.164436 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.164610 1881287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:43:53.164680 1881287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:43:53.165704 1881287 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:43:53.167086 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:43:53.167111 1881287 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:43:53.167145 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.167519 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.169405 1881287 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:43:53.170806 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:43:53.170839 1881287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:43:53.170868 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.170970 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.172904 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.172958 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.173486 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.174763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.175298 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.175869 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.177546 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.178363 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179072 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179380 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.179403 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179451 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.179501 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.179539 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179550 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.179763 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179830 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.179923 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.180049 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:53.188031 1881287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I1013 15:43:53.188746 1881287 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:43:53.189369 1881287 main.go:141] libmachine: Using API Version  1
	I1013 15:43:53.189391 1881287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:43:53.189889 1881287 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:43:53.190124 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetState
	I1013 15:43:53.192665 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .DriverName
	I1013 15:43:53.192993 1881287 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.193015 1881287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:43:53.193041 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHHostname
	I1013 15:43:53.197517 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198127 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:df:00", ip: ""} in network mk-default-k8s-diff-port-426789: {Iface:virbr2 ExpiryTime:2025-10-13 16:43:36 +0000 UTC Type:0 Mac:52:54:00:07:df:00 Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:default-k8s-diff-port-426789 Clientid:01:52:54:00:07:df:00}
	I1013 15:43:53.198171 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | domain default-k8s-diff-port-426789 has defined IP address 192.168.50.176 and MAC address 52:54:00:07:df:00 in network mk-default-k8s-diff-port-426789
	I1013 15:43:53.198708 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHPort
	I1013 15:43:53.198952 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHKeyPath
	I1013 15:43:53.199191 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .GetSSHUsername
	I1013 15:43:53.199425 1881287 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/default-k8s-diff-port-426789/id_rsa Username:docker}
	I1013 15:43:54.398978 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Getting to WaitForSSH function...
	I1013 15:43:54.402868 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403485 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.403522 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.403692 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH client type: external
	I1013 15:43:54.403735 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa (-rw-------)
	I1013 15:43:54.403786 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 15:43:54.403800 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | About to run SSH command:
	I1013 15:43:54.403823 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | exit 0
	I1013 15:43:54.544257 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | SSH cmd err, output: <nil>: 
	I1013 15:43:54.544730 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetConfigRaw
	I1013 15:43:54.545413 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.549394 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550047 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.550090 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.550494 1881569 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/config.json ...
	I1013 15:43:54.550797 1881569 machine.go:93] provisionDockerMachine start ...
	I1013 15:43:54.550830 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:54.551132 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.554299 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554707 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.554754 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.554943 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.555175 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555424 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.555617 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.555946 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.556248 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.556260 1881569 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 15:43:54.688707 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 15:43:54.688778 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689138 1881569 buildroot.go:166] provisioning hostname "newest-cni-400509"
	I1013 15:43:54.689168 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.689397 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.693596 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694246 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.694300 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.694537 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.694811 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695013 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.695198 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.695392 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.695702 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.695740 1881569 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400509 && echo "newest-cni-400509" | sudo tee /etc/hostname
	I1013 15:43:54.834089 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400509
	
	I1013 15:43:54.834128 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.838142 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.838584 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.838632 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.839006 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:54.839287 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839492 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:54.839694 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:54.840030 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:54.840291 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:54.840310 1881569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400509/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 15:43:54.976516 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 15:43:54.976554 1881569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-1810975/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-1810975/.minikube}
	I1013 15:43:54.976618 1881569 buildroot.go:174] setting up certificates
	I1013 15:43:54.976643 1881569 provision.go:84] configureAuth start
	I1013 15:43:54.976668 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetMachineName
	I1013 15:43:54.977165 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:54.981371 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.981937 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.981969 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.982449 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:54.986173 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986658 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:54.986687 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:54.986975 1881569 provision.go:143] copyHostCerts
	I1013 15:43:54.987049 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem, removing ...
	I1013 15:43:54.987072 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem
	I1013 15:43:54.987167 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/cert.pem (1123 bytes)
	I1013 15:43:54.987325 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem, removing ...
	I1013 15:43:54.987339 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem
	I1013 15:43:54.987386 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/key.pem (1679 bytes)
	I1013 15:43:54.987492 1881569 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem, removing ...
	I1013 15:43:54.987508 1881569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem
	I1013 15:43:54.987563 1881569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.pem (1082 bytes)
	I1013 15:43:54.987652 1881569 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400509 san=[127.0.0.1 192.168.39.58 localhost minikube newest-cni-400509]
	I1013 15:43:56.105921 1881569 provision.go:177] copyRemoteCerts
	I1013 15:43:56.105986 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 15:43:56.106012 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.109883 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110333 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.110378 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.110655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.110940 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.111126 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.111313 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.204900 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 15:43:56.250950 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 15:43:56.289008 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 15:43:56.329429 1881569 provision.go:87] duration metric: took 1.352737429s to configureAuth
	I1013 15:43:56.329473 1881569 buildroot.go:189] setting minikube options for container-runtime
	I1013 15:43:56.329690 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:43:56.329707 1881569 machine.go:96] duration metric: took 1.778889003s to provisionDockerMachine
	I1013 15:43:56.329732 1881569 start.go:293] postStartSetup for "newest-cni-400509" (driver="kvm2")
	I1013 15:43:56.329749 1881569 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 15:43:56.329787 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.330185 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 15:43:56.330228 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.334038 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334514 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.334549 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.334786 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.335028 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.335223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.335409 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.434835 1881569 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 15:43:56.440734 1881569 info.go:137] Remote host: Buildroot 2025.02
	I1013 15:43:56.440767 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/addons for local assets ...
	I1013 15:43:56.440835 1881569 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-1810975/.minikube/files for local assets ...
	I1013 15:43:56.440916 1881569 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem -> 18149272.pem in /etc/ssl/certs
	I1013 15:43:56.441040 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 15:43:56.459176 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:43:56.502925 1881569 start.go:296] duration metric: took 173.137045ms for postStartSetup
	I1013 15:43:56.502995 1881569 fix.go:56] duration metric: took 16.870835137s for fixHost
	I1013 15:43:56.503030 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.506452 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.506870 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.506935 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.507108 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.507367 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507582 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.507785 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.508020 1881569 main.go:141] libmachine: Using SSH client type: native
	I1013 15:43:56.508247 1881569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1013 15:43:56.508261 1881569 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 15:43:56.624915 1881569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760370236.574388905
	
	I1013 15:43:56.624944 1881569 fix.go:216] guest clock: 1760370236.574388905
	I1013 15:43:56.624957 1881569 fix.go:229] Guest: 2025-10-13 15:43:56.574388905 +0000 UTC Remote: 2025-10-13 15:43:56.50300288 +0000 UTC m=+19.831043931 (delta=71.386025ms)
	I1013 15:43:56.625020 1881569 fix.go:200] guest clock delta is within tolerance: 71.386025ms
	I1013 15:43:56.625030 1881569 start.go:83] releasing machines lock for "newest-cni-400509", held for 16.992897063s
	I1013 15:43:56.625061 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.625392 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:56.628808 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629195 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.629225 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.629541 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630278 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630480 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:43:56.630581 1881569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 15:43:56.630650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.630706 1881569 ssh_runner.go:195] Run: cat /version.json
	I1013 15:43:56.630755 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:43:56.635920 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636466 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636492 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.636511 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.636805 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.637052 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.637161 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:56.637177 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:56.637345 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.637508 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:56.637592 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:43:56.638223 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:43:56.638488 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:43:56.638658 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:43:53.506025 1881287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:43:53.552445 1881287 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561765 1881287 node_ready.go:49] node "default-k8s-diff-port-426789" is "Ready"
	I1013 15:43:53.561797 1881287 node_ready.go:38] duration metric: took 9.308209ms for node "default-k8s-diff-port-426789" to be "Ready" ...
	I1013 15:43:53.561815 1881287 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:43:53.561875 1881287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:43:53.620414 1881287 api_server.go:72] duration metric: took 509.358173ms to wait for apiserver process to appear ...
	I1013 15:43:53.620447 1881287 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:43:53.620471 1881287 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8444/healthz ...
	I1013 15:43:53.648031 1881287 api_server.go:279] https://192.168.50.176:8444/healthz returned 200:
	ok
	I1013 15:43:53.650864 1881287 api_server.go:141] control plane version: v1.34.1
	I1013 15:43:53.650897 1881287 api_server.go:131] duration metric: took 30.442085ms to wait for apiserver health ...
	I1013 15:43:53.650909 1881287 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:43:53.673424 1881287 system_pods.go:59] 8 kube-system pods found
	I1013 15:43:53.673472 1881287 system_pods.go:61] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.673485 1881287 system_pods.go:61] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.673496 1881287 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.673507 1881287 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.673518 1881287 system_pods.go:61] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.673527 1881287 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.673540 1881287 system_pods.go:61] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.673549 1881287 system_pods.go:61] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.673559 1881287 system_pods.go:74] duration metric: took 22.641644ms to wait for pod list to return data ...
	I1013 15:43:53.673573 1881287 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:43:53.685624 1881287 default_sa.go:45] found service account: "default"
	I1013 15:43:53.685669 1881287 default_sa.go:55] duration metric: took 12.081401ms for default service account to be created ...
	I1013 15:43:53.685695 1881287 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 15:43:53.703485 1881287 system_pods.go:86] 8 kube-system pods found
	I1013 15:43:53.703536 1881287 system_pods.go:89] "coredns-66bc5c9577-7mm74" [a6965960-a658-468c-a225-0a99e4ee6d29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:43:53.703551 1881287 system_pods.go:89] "etcd-default-k8s-diff-port-426789" [97d29e80-2aae-46cb-b01c-2c94280cd2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:43:53.703563 1881287 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-426789" [b6f928ae-7bf8-48a8-b3df-251e2c47c935] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:43:53.703577 1881287 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-426789" [fffd4380-39d1-482a-a943-ac4ce7f67a82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:43:53.703585 1881287 system_pods.go:89] "kube-proxy-2vt8l" [1bae3750-c6df-46d8-8b33-130e1773600a] Running
	I1013 15:43:53.703592 1881287 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-426789" [1cf8ece0-4fbc-4ab1-9ec8-d206af58f90d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:43:53.703602 1881287 system_pods.go:89] "metrics-server-746fcd58dc-mqvqg" [e7582897-ca82-4255-9bc3-8e9563b9e410] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:43:53.703612 1881287 system_pods.go:89] "storage-provisioner" [ff2ac22d-9091-4b0c-b7fd-0c2e3e7c0062] Running
	I1013 15:43:53.703625 1881287 system_pods.go:126] duration metric: took 17.919545ms to wait for k8s-apps to be running ...
	I1013 15:43:53.703639 1881287 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 15:43:53.703708 1881287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 15:43:53.836388 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:43:53.847671 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:43:53.859317 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:43:53.859351 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:43:53.863118 1881287 system_svc.go:56] duration metric: took 159.468238ms WaitForService to wait for kubelet
	I1013 15:43:53.863156 1881287 kubeadm.go:586] duration metric: took 752.10936ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 15:43:53.863183 1881287 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:43:53.868102 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:43:53.868135 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:43:53.876846 1881287 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:43:53.876881 1881287 node_conditions.go:123] node cpu capacity is 2
	I1013 15:43:53.876895 1881287 node_conditions.go:105] duration metric: took 13.705749ms to run NodePressure ...
	I1013 15:43:53.876911 1881287 start.go:241] waiting for startup goroutines ...
	I1013 15:43:53.975801 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:43:53.975837 1881287 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:43:54.014372 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:43:54.014413 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:43:54.097966 1881287 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.098001 1881287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:43:54.102029 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:43:54.102070 1881287 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:43:54.231798 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:43:54.231824 1881287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:43:54.279938 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:43:54.422682 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:43:54.422738 1881287 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:43:54.559022 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:43:54.559045 1881287 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:43:54.673642 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:43:54.673671 1881287 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:43:54.816125 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:43:54.816167 1881287 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:43:54.994488 1881287 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:54.994521 1881287 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:43:55.030337 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.193903867s)
	I1013 15:43:55.030400 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030415 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.030809 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.030875 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.030890 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.030903 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.030915 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.031248 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.031256 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.031269 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060389 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:55.060423 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:55.060934 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:55.060958 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:55.060959 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:55.140795 1881287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:43:56.965227 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.117511004s)
	I1013 15:43:56.965299 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965313 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.965682 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.965698 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:56.965701 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.965725 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.965735 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.966055 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.966089 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.982812 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.702823647s)
	I1013 15:43:56.982887 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.982902 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983290 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983313 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983346 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:56.983354 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:56.983623 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:56.983642 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:56.983654 1881287 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-426789"
	I1013 15:43:57.358086 1881287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.217241399s)
	I1013 15:43:57.358160 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358174 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358579 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358599 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.358609 1881287 main.go:141] libmachine: Making call to close driver server
	I1013 15:43:57.358631 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) Calling .Close
	I1013 15:43:57.358917 1881287 main.go:141] libmachine: (default-k8s-diff-port-426789) DBG | Closing plugin on server side
	I1013 15:43:57.358932 1881287 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:43:57.358960 1881287 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:43:57.363260 1881287 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-426789 addons enable metrics-server
	
	I1013 15:43:57.365802 1881287 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1013 15:43:57.367317 1881287 addons.go:514] duration metric: took 4.256188456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1013 15:43:57.367371 1881287 start.go:246] waiting for cluster config update ...
	I1013 15:43:57.367388 1881287 start.go:255] writing updated cluster config ...
	I1013 15:43:57.367791 1881287 ssh_runner.go:195] Run: rm -f paused
	I1013 15:43:57.378391 1881287 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:43:57.391148 1881287 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:43:56.747519 1881569 ssh_runner.go:195] Run: systemctl --version
	I1013 15:43:56.754883 1881569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 15:43:56.762412 1881569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 15:43:56.762502 1881569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 15:43:56.786981 1881569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 15:43:56.787012 1881569 start.go:495] detecting cgroup driver to use...
	I1013 15:43:56.787098 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1013 15:43:56.822198 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1013 15:43:56.844111 1881569 docker.go:218] disabling cri-docker service (if available) ...
	I1013 15:43:56.844200 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 15:43:56.869650 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 15:43:56.890055 1881569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 15:43:57.069567 1881569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 15:43:57.320533 1881569 docker.go:234] disabling docker service ...
	I1013 15:43:57.320624 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 15:43:57.340325 1881569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 15:43:57.358343 1881569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 15:43:57.573206 1881569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 15:43:57.752872 1881569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 15:43:57.778609 1881569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 15:43:57.809437 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1013 15:43:57.825120 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1013 15:43:57.841470 1881569 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1013 15:43:57.841551 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1013 15:43:57.858777 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.874650 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1013 15:43:57.889338 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1013 15:43:57.905170 1881569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 15:43:57.921541 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1013 15:43:57.937087 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1013 15:43:57.951733 1881569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1013 15:43:57.967796 1881569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 15:43:57.981546 1881569 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 15:43:57.981609 1881569 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 15:43:58.008790 1881569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 15:43:58.024908 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:43:58.218957 1881569 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1013 15:43:58.264961 1881569 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1013 15:43:58.265076 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:58.271878 1881569 retry.go:31] will retry after 1.359480351s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1013 15:43:59.632478 1881569 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1013 15:43:59.640017 1881569 start.go:563] Will wait 60s for crictl version
	I1013 15:43:59.640109 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:43:59.646533 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 15:43:59.704210 1881569 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I1013 15:43:59.704321 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.745848 1881569 ssh_runner.go:195] Run: containerd --version
	I1013 15:43:59.781571 1881569 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.23 ...
	I1013 15:43:59.783056 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetIP
	I1013 15:43:59.787259 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.787813 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:43:59.787850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:43:59.788151 1881569 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 15:43:59.793319 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:43:59.813808 1881569 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 15:43:59.815535 1881569 kubeadm.go:883] updating cluster {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 15:43:59.815759 1881569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1013 15:43:59.815862 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.858933 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.858960 1881569 containerd.go:534] Images already preloaded, skipping extraction
	I1013 15:43:59.859025 1881569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 15:43:59.900328 1881569 containerd.go:627] all images are preloaded for containerd runtime.
	I1013 15:43:59.900362 1881569 cache_images.go:85] Images are preloaded, skipping loading
	I1013 15:43:59.900381 1881569 kubeadm.go:934] updating node { 192.168.39.58 8443 v1.34.1 containerd true true} ...
	I1013 15:43:59.900516 1881569 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-400509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 15:43:59.900613 1881569 ssh_runner.go:195] Run: sudo crictl info
	I1013 15:43:59.950762 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:43:59.950793 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:43:59.950823 1881569 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 15:43:59.950864 1881569 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400509 NodeName:newest-cni-400509 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 15:43:59.951043 1881569 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-400509"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 15:43:59.951135 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 15:43:59.967876 1881569 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 15:43:59.967956 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 15:43:59.982916 1881569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1013 15:44:00.010237 1881569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 15:44:00.040144 1881569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1013 15:44:00.066386 1881569 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1013 15:44:00.071339 1881569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 15:44:00.090025 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:00.252566 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:00.303616 1881569 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509 for IP: 192.168.39.58
	I1013 15:44:00.303643 1881569 certs.go:195] generating shared ca certs ...
	I1013 15:44:00.303666 1881569 certs.go:227] acquiring lock for ca certs: {Name:mkca3ca51f22974142f4a83d808e725ff7c8cd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:00.303875 1881569 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key
	I1013 15:44:00.303956 1881569 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key
	I1013 15:44:00.303979 1881569 certs.go:257] generating profile certs ...
	I1013 15:44:00.304150 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/client.key
	I1013 15:44:00.304227 1881569 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key.832cd03a
	I1013 15:44:00.304286 1881569 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key
	I1013 15:44:00.304458 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem (1338 bytes)
	W1013 15:44:00.304508 1881569 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927_empty.pem, impossibly tiny 0 bytes
	I1013 15:44:00.304522 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 15:44:00.304562 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/ca.pem (1082 bytes)
	I1013 15:44:00.304594 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/cert.pem (1123 bytes)
	I1013 15:44:00.304628 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/key.pem (1679 bytes)
	I1013 15:44:00.304681 1881569 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem (1708 bytes)
	I1013 15:44:00.305582 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 15:44:00.349695 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 15:44:00.394423 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 15:44:00.453420 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 15:44:00.500378 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 15:44:00.553138 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 15:44:00.590334 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 15:44:00.630023 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/newest-cni-400509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 15:44:00.668829 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 15:44:00.712223 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/certs/1814927.pem --> /usr/share/ca-certificates/1814927.pem (1338 bytes)
	I1013 15:44:00.752915 1881569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/ssl/certs/18149272.pem --> /usr/share/ca-certificates/18149272.pem (1708 bytes)
	I1013 15:44:00.789877 1881569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 15:44:00.813337 1881569 ssh_runner.go:195] Run: openssl version
	I1013 15:44:00.821230 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1814927.pem && ln -fs /usr/share/ca-certificates/1814927.pem /etc/ssl/certs/1814927.pem"
	I1013 15:44:00.837532 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843842 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 14:22 /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.843915 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1814927.pem
	I1013 15:44:00.852403 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1814927.pem /etc/ssl/certs/51391683.0"
	I1013 15:44:00.868962 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18149272.pem && ln -fs /usr/share/ca-certificates/18149272.pem /etc/ssl/certs/18149272.pem"
	I1013 15:44:00.887762 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895478 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 14:22 /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.895571 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18149272.pem
	I1013 15:44:00.904610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18149272.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 15:44:00.921509 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 15:44:00.940954 1881569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947541 1881569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 13:55 /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.947630 1881569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 15:44:00.956030 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
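The `ln -fs ... /etc/ssl/certs/51391683.0`-style commands above use OpenSSL's subject-hash naming scheme: the link name is the output of `openssl x509 -hash` plus a `.0` suffix, so OpenSSL can find a CA by subject at verification time. A minimal sketch, assuming `openssl` is on PATH (the throwaway cert, CN, and `/tmp` paths below are illustrative, not minikube's):

```shell
#!/bin/sh
# Sketch: derive the /etc/ssl/certs/<hash>.0 symlink name used above.
# Assumptions: openssl on PATH; cert and paths are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
echo "would link: /etc/ssl/certs/${hash}.0"
```

Note the guard in the logged commands (`test -L /etc/ssl/certs/... || ln -fs ...`): the symlink is only created when it does not already exist.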
	I1013 15:44:00.974527 1881569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 15:44:00.981332 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 15:44:00.992960 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 15:44:01.004003 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 15:44:01.012671 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 15:44:01.020681 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 15:44:01.028927 1881569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
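The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; `-checkend N` exits non-zero if the cert expires within that window, which is what minikube keys cert regeneration on. A hedged sketch with a throwaway certificate (paths and CN are placeholders):

```shell
#!/bin/sh
# Sketch of the expiry check: -checkend N exits 0 iff the certificate
# is still valid N seconds from now. Assumptions: openssl on PATH;
# the 2-day self-signed cert below is a stand-in, not a minikube cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo" \
  -keyout /tmp/ce.key -out /tmp/ce.pem 2>/dev/null
openssl x509 -noout -in /tmp/ce.pem -checkend 86400 \
  && echo "still valid 24h from now"
openssl x509 -noout -in /tmp/ce.pem -checkend 259200 \
  || echo "would expire within 72h"
```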
	I1013 15:44:01.037647 1881569 kubeadm.go:400] StartCluster: {Name:newest-cni-400509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:newest-cni-400509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedul
edStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 15:44:01.037778 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1013 15:44:01.037843 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.097948 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.097981 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.097988 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.097993 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.097997 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.098002 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.098006 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.098010 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.098014 1881569 cri.go:89] found id: ""
	I1013 15:44:01.098075 1881569 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1013 15:44:01.122443 1881569 kubeadm.go:407] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T15:44:01Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1013 15:44:01.122587 1881569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 15:44:01.144393 1881569 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 15:44:01.144424 1881569 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 15:44:01.144489 1881569 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 15:44:01.159059 1881569 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:44:01.160097 1881569 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-400509" does not appear in /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:01.160849 1881569 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-1810975/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-400509" cluster setting kubeconfig missing "newest-cni-400509" context setting]
	I1013 15:44:01.162117 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:01.164324 1881569 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 15:44:01.182868 1881569 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.58
	I1013 15:44:01.182912 1881569 kubeadm.go:1160] stopping kube-system containers ...
	I1013 15:44:01.182929 1881569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1013 15:44:01.183008 1881569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 15:44:01.236181 1881569 cri.go:89] found id: "1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554"
	I1013 15:44:01.236210 1881569 cri.go:89] found id: "36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675"
	I1013 15:44:01.236217 1881569 cri.go:89] found id: "95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb"
	I1013 15:44:01.236223 1881569 cri.go:89] found id: "2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5"
	I1013 15:44:01.236228 1881569 cri.go:89] found id: "2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1"
	I1013 15:44:01.236233 1881569 cri.go:89] found id: "a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4"
	I1013 15:44:01.236237 1881569 cri.go:89] found id: "94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab"
	I1013 15:44:01.236241 1881569 cri.go:89] found id: "590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c"
	I1013 15:44:01.236245 1881569 cri.go:89] found id: ""
	I1013 15:44:01.236272 1881569 cri.go:252] Stopping containers: [1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c]
	I1013 15:44:01.236375 1881569 ssh_runner.go:195] Run: which crictl
	I1013 15:44:01.241802 1881569 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 1294edf9edcaea4b965a9625b370280a0f6b8c92764a7fdcd4e924b1032da554 36362b7115f42835919b0943ef261b039e85be969848f5a158113fb6e4694675 95ea2b6cff3d5cfd169a09bf3b5f2fbc2885a64a784235a7c6a61d9bdfe416eb 2cd705e0dcdfa3e0bd6f135cf8d8116cb8354f90b1926328a1712b129a2a69c5 2968a705eea29bcf64703dfeb47fa15b162c4b9c1512df14639224a9a08ddbe1 a10692761a47d8def283a0d2edbee20de040d1656e25dcab7f52395ecae8a9b4 94e330e9e628ff91ed858ae2c4e2bb16315c1adb90f96921f914a2f49c4c28ab 590aac28627cdc81556e8347114e510d2c4b541310d74d07ba33e2dfe76ade6c
	I1013 15:44:01.290389 1881569 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 15:44:01.314882 1881569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 15:44:01.329255 1881569 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 15:44:01.329305 1881569 kubeadm.go:157] found existing configuration files:
	
	I1013 15:44:01.329373 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 15:44:01.341956 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 15:44:01.342028 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 15:44:01.355841 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 15:44:01.368810 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 15:44:01.368903 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 15:44:01.382268 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.396472 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 15:44:01.396552 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 15:44:01.412562 1881569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 15:44:01.426123 1881569 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 15:44:01.426188 1881569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 15:44:01.442585 1881569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 15:44:01.460493 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:01.611108 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	W1013 15:43:59.400593 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	W1013 15:44:01.404013 1881287 pod_ready.go:104] pod "coredns-66bc5c9577-7mm74" is not "Ready", error: <nil>
	I1013 15:44:02.909951 1881287 pod_ready.go:94] pod "coredns-66bc5c9577-7mm74" is "Ready"
	I1013 15:44:02.909990 1881287 pod_ready.go:86] duration metric: took 5.518800662s for pod "coredns-66bc5c9577-7mm74" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.913489 1881287 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.919647 1881287 pod_ready.go:94] pod "etcd-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:02.919678 1881287 pod_ready.go:86] duration metric: took 6.161871ms for pod "etcd-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:02.928092 1881287 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.438075 1881287 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.438113 1881287 pod_ready.go:86] duration metric: took 1.509988538s for pod "kube-apiserver-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.442872 1881287 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.451602 1881287 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:04.451645 1881287 pod_ready.go:86] duration metric: took 8.73711ms for pod "kube-controller-manager-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.497031 1881287 pod_ready.go:83] waiting for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:04.897578 1881287 pod_ready.go:94] pod "kube-proxy-2vt8l" is "Ready"
	I1013 15:44:04.897618 1881287 pod_ready.go:86] duration metric: took 400.546183ms for pod "kube-proxy-2vt8l" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.096440 1881287 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496577 1881287 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-426789" is "Ready"
	I1013 15:44:05.496616 1881287 pod_ready.go:86] duration metric: took 400.135912ms for pod "kube-scheduler-default-k8s-diff-port-426789" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 15:44:05.496664 1881287 pod_ready.go:40] duration metric: took 8.118190331s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 15:44:05.552871 1881287 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:05.554860 1881287 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-426789" cluster and "default" namespace by default
	I1013 15:44:02.860183 1881569 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.249017124s)
	I1013 15:44:02.860277 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.168409 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.257048 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:03.348980 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:03.349102 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:03.849619 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.350010 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.849274 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:04.888091 1881569 api_server.go:72] duration metric: took 1.539128472s to wait for apiserver process to appear ...
	I1013 15:44:04.888128 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:04.888157 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:04.888817 1881569 api_server.go:269] stopped: https://192.168.39.58:8443/healthz: Get "https://192.168.39.58:8443/healthz": dial tcp 192.168.39.58:8443: connect: connection refused
	I1013 15:44:05.388397 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:07.970700 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:07.970755 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:07.970773 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.014873 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 15:44:08.014906 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 15:44:08.388242 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.394684 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.394733 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:08.888394 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:08.898015 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:08.898049 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.388508 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.394367 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.394400 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:09.888304 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:09.895427 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 15:44:09.895462 1881569 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 15:44:10.389244 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:10.396050 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:10.404568 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:10.404611 1881569 api_server.go:131] duration metric: took 5.516473663s to wait for apiserver health ...
	I1013 15:44:10.404626 1881569 cni.go:84] Creating CNI manager for ""
	I1013 15:44:10.404634 1881569 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 15:44:10.406752 1881569 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 15:44:10.408371 1881569 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 15:44:10.423786 1881569 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 15:44:10.455726 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:10.462697 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:10.462753 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462769 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:10.462780 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:10.462790 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:10.462802 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:10.462808 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:10.462815 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:10.462842 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:10.462847 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:10.462855 1881569 system_pods.go:74] duration metric: took 7.102704ms to wait for pod list to return data ...
	I1013 15:44:10.462869 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:10.467505 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:10.467542 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:10.467556 1881569 node_conditions.go:105] duration metric: took 4.682317ms to run NodePressure ...
	I1013 15:44:10.467610 1881569 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 15:44:10.762255 1881569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 15:44:10.780389 1881569 ops.go:34] apiserver oom_adj: -16
	I1013 15:44:10.780421 1881569 kubeadm.go:601] duration metric: took 9.635988482s to restartPrimaryControlPlane
	I1013 15:44:10.780437 1881569 kubeadm.go:402] duration metric: took 9.742806388s to StartCluster
	I1013 15:44:10.780475 1881569 settings.go:142] acquiring lock: {Name:mk62cbb82c41e7be9e5c2abcba73b92b00678893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.780589 1881569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:44:10.782504 1881569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-1810975/kubeconfig: {Name:mk475ca44795fc55faf45ddf8ab23f10e3531969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 15:44:10.782808 1881569 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1013 15:44:10.782888 1881569 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 15:44:10.783000 1881569 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400509"
	I1013 15:44:10.783025 1881569 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400509"
	W1013 15:44:10.783033 1881569 addons.go:247] addon storage-provisioner should already be in state true
	I1013 15:44:10.783032 1881569 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400509"
	I1013 15:44:10.783057 1881569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400509"
	I1013 15:44:10.783065 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783066 1881569 addons.go:69] Setting metrics-server=true in profile "newest-cni-400509"
	I1013 15:44:10.783090 1881569 addons.go:69] Setting dashboard=true in profile "newest-cni-400509"
	I1013 15:44:10.783117 1881569 addons.go:238] Setting addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:10.783123 1881569 addons.go:238] Setting addon dashboard=true in "newest-cni-400509"
	W1013 15:44:10.783132 1881569 addons.go:247] addon dashboard should already be in state true
	I1013 15:44:10.783147 1881569 config.go:182] Loaded profile config "newest-cni-400509": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:44:10.783174 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	W1013 15:44:10.783132 1881569 addons.go:247] addon metrics-server should already be in state true
	I1013 15:44:10.783246 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.783508 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783559 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783583 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783505 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783614 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783640 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.783648 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.783670 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.784368 1881569 out.go:179] * Verifying Kubernetes components...
	I1013 15:44:10.785756 1881569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1013 15:44:10.800271 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I1013 15:44:10.801032 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801109 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42931
	I1013 15:44:10.801246 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801506 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.801929 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.801955 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802056 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802082 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802110 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I1013 15:44:10.802430 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.802455 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802480 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.802460 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.802674 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.803138 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803158 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803208 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.803230 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.803443 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.803454 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.803467 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.803920 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.804033 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.804083 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.804124 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.812531 1881569 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400509"
	W1013 15:44:10.812560 1881569 addons.go:247] addon default-storageclass should already be in state true
	I1013 15:44:10.812594 1881569 host.go:66] Checking if "newest-cni-400509" exists ...
	I1013 15:44:10.812997 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.813066 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.820690 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I1013 15:44:10.821988 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.822645 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.822687 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.823210 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.823487 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.827289 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.829099 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I1013 15:44:10.829669 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.829812 1881569 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 15:44:10.830088 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I1013 15:44:10.830259 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.830280 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.830669 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.830818 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.830868 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.831364 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.831385 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.832151 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40743
	I1013 15:44:10.832239 1881569 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 15:44:10.832197 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.832793 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.833231 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.833272 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.833297 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.833471 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 15:44:10.833488 1881569 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 15:44:10.833508 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.833970 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.834643 1881569 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1013 15:44:10.834786 1881569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:44:10.834839 1881569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:44:10.835807 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.837731 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838271 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.838321 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.838595 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.838792 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.838994 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.839128 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.839520 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 15:44:10.839547 1881569 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 15:44:10.839574 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.840359 1881569 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 15:44:10.841784 1881569 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:10.841804 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 15:44:10.841825 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.844531 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845501 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.845570 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.845952 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.846206 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.846484 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.846861 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.847137 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.847628 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.847850 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.848261 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.848469 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.848657 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.848992 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:10.853772 1881569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1013 15:44:10.854204 1881569 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:44:10.854681 1881569 main.go:141] libmachine: Using API Version  1
	I1013 15:44:10.854698 1881569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:44:10.855059 1881569 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:44:10.855327 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetState
	I1013 15:44:10.857412 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .DriverName
	I1013 15:44:10.857679 1881569 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:10.857694 1881569 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 15:44:10.857728 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHHostname
	I1013 15:44:10.861587 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.861994 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:3a:80", ip: ""} in network mk-newest-cni-400509: {Iface:virbr4 ExpiryTime:2025-10-13 16:43:52 +0000 UTC Type:0 Mac:52:54:00:a8:3a:80 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:newest-cni-400509 Clientid:01:52:54:00:a8:3a:80}
	I1013 15:44:10.862021 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | domain newest-cni-400509 has defined IP address 192.168.39.58 and MAC address 52:54:00:a8:3a:80 in network mk-newest-cni-400509
	I1013 15:44:10.862318 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHPort
	I1013 15:44:10.862498 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHKeyPath
	I1013 15:44:10.862640 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .GetSSHUsername
	I1013 15:44:10.862796 1881569 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/newest-cni-400509/id_rsa Username:docker}
	I1013 15:44:11.065604 1881569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 15:44:11.089626 1881569 api_server.go:52] waiting for apiserver process to appear ...
	I1013 15:44:11.089733 1881569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:44:11.110889 1881569 api_server.go:72] duration metric: took 328.043615ms to wait for apiserver process to appear ...
	I1013 15:44:11.110921 1881569 api_server.go:88] waiting for apiserver healthz status ...
	I1013 15:44:11.110945 1881569 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1013 15:44:11.116791 1881569 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1013 15:44:11.117887 1881569 api_server.go:141] control plane version: v1.34.1
	I1013 15:44:11.117919 1881569 api_server.go:131] duration metric: took 6.988921ms to wait for apiserver health ...
	I1013 15:44:11.117931 1881569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 15:44:11.127122 1881569 system_pods.go:59] 9 kube-system pods found
	I1013 15:44:11.127169 1881569 system_pods.go:61] "coredns-66bc5c9577-bjq5v" [91a9af9a-e41a-4318-81d9-f7d51fe95004] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127186 1881569 system_pods.go:61] "coredns-66bc5c9577-mbvz8" [3bd6fcbc-f1cd-4996-9cc5-af429ec54d8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 15:44:11.127195 1881569 system_pods.go:61] "etcd-newest-cni-400509" [ea2910a6-f7b1-41c0-89b2-be41f742a959] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 15:44:11.127208 1881569 system_pods.go:61] "kube-apiserver-newest-cni-400509" [1837ba3d-de07-4dd0-9cb3-0ad36c5da82c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 15:44:11.127214 1881569 system_pods.go:61] "kube-controller-manager-newest-cni-400509" [b38e0595-92d4-4723-a550-02b3567fa410] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 15:44:11.127218 1881569 system_pods.go:61] "kube-proxy-w5j92" [f2b6880d-90c5-484d-84cc-6f657d38179d] Running
	I1013 15:44:11.127223 1881569 system_pods.go:61] "kube-scheduler-newest-cni-400509" [f55dcdac-6629-48f5-ab8b-fff90f5196aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 15:44:11.127228 1881569 system_pods.go:61] "metrics-server-746fcd58dc-nnvx9" [836f9d73-0cde-4dea-9bff-f6ac345cadc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 15:44:11.127231 1881569 system_pods.go:61] "storage-provisioner" [6557f44c-4238-4b21-b5e5-2ef2cb2c554c] Running
	I1013 15:44:11.127241 1881569 system_pods.go:74] duration metric: took 9.299922ms to wait for pod list to return data ...
	I1013 15:44:11.127267 1881569 default_sa.go:34] waiting for default service account to be created ...
	I1013 15:44:11.131642 1881569 default_sa.go:45] found service account: "default"
	I1013 15:44:11.131672 1881569 default_sa.go:55] duration metric: took 4.396286ms for default service account to be created ...
	I1013 15:44:11.131689 1881569 kubeadm.go:586] duration metric: took 348.849317ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 15:44:11.131723 1881569 node_conditions.go:102] verifying NodePressure condition ...
	I1013 15:44:11.135748 1881569 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 15:44:11.135781 1881569 node_conditions.go:123] node cpu capacity is 2
	I1013 15:44:11.135795 1881569 node_conditions.go:105] duration metric: took 4.065136ms to run NodePressure ...
	I1013 15:44:11.135809 1881569 start.go:241] waiting for startup goroutines ...
	I1013 15:44:11.297679 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 15:44:11.297704 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1013 15:44:11.302366 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 15:44:11.302395 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 15:44:11.328126 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 15:44:11.336312 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 15:44:11.390077 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 15:44:11.390113 1881569 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 15:44:11.401349 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 15:44:11.401380 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 15:44:11.487081 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 15:44:11.487113 1881569 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 15:44:11.514896 1881569 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.514927 1881569 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 15:44:11.548697 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 15:44:11.548735 1881569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 15:44:11.576084 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 15:44:11.638992 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 15:44:11.639025 1881569 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 15:44:11.739144 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 15:44:11.739177 1881569 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 15:44:11.851415 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 15:44:11.851451 1881569 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 15:44:11.964190 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 15:44:11.964227 1881569 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 15:44:12.151581 1881569 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:12.151616 1881569 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 15:44:12.348324 1881569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 15:44:14.548429 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.212077572s)
	I1013 15:44:14.548509 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548523 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548612 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.22045241s)
	I1013 15:44:14.548643 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548655 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.548889 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.548910 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.548922 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.548931 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549013 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549064 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549083 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549102 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.549113 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.549247 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549260 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.549515 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.549546 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.549552 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.590958 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.590989 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.591387 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.591401 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.591419 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690046 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.113908538s)
	I1013 15:44:14.690105 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690120 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690573 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690605 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690622 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690634 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:14.690650 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:14.690904 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:14.690936 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:14.690957 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:14.690981 1881569 addons.go:479] Verifying addon metrics-server=true in "newest-cni-400509"
	I1013 15:44:15.069622 1881569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.721227304s)
	I1013 15:44:15.069689 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.069705 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070241 1881569 main.go:141] libmachine: (newest-cni-400509) DBG | Closing plugin on server side
	I1013 15:44:15.070270 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070282 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.070295 1881569 main.go:141] libmachine: Making call to close driver server
	I1013 15:44:15.070301 1881569 main.go:141] libmachine: (newest-cni-400509) Calling .Close
	I1013 15:44:15.070572 1881569 main.go:141] libmachine: Successfully made call to close driver server
	I1013 15:44:15.070587 1881569 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 15:44:15.074390 1881569 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-400509 addons enable metrics-server
	
	I1013 15:44:15.076426 1881569 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1013 15:44:15.077979 1881569 addons.go:514] duration metric: took 4.295084518s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1013 15:44:15.078038 1881569 start.go:246] waiting for cluster config update ...
	I1013 15:44:15.078071 1881569 start.go:255] writing updated cluster config ...
	I1013 15:44:15.078443 1881569 ssh_runner.go:195] Run: rm -f paused
	I1013 15:44:15.144611 1881569 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 15:44:15.146748 1881569 out.go:179] * Done! kubectl is now configured to use "newest-cni-400509" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7996dc307393d       523cad1a4df73       3 minutes ago       Exited              dashboard-metrics-scraper   6                   c1b8962087cfc       dashboard-metrics-scraper-6ffb444bf9-s8jp6
	e713caac79780       6e38f40d628db       8 minutes ago       Running             storage-provisioner         2                   ada7b154122ca       storage-provisioner
	9d3845de8158d       56cc512116c8f       9 minutes ago       Running             busybox                     1                   0bc53ce6693b8       busybox
	de6045fe8b644       52546a367cc9e       9 minutes ago       Running             coredns                     1                   6018c90713177       coredns-66bc5c9577-7mm74
	9afcd220dce5c       6e38f40d628db       9 minutes ago       Exited              storage-provisioner         1                   ada7b154122ca       storage-provisioner
	ffd2fa4c7492a       fc25172553d79       9 minutes ago       Running             kube-proxy                  1                   20743a14589b0       kube-proxy-2vt8l
	2e09d68b0f5af       7dd6aaa1717ab       9 minutes ago       Running             kube-scheduler              1                   a16a940204f01       kube-scheduler-default-k8s-diff-port-426789
	5b4a3be1f05df       5f1f5298c888d       9 minutes ago       Running             etcd                        1                   5489eff705493       etcd-default-k8s-diff-port-426789
	86a5135c54749       c3994bc696102       9 minutes ago       Running             kube-apiserver              1                   866379ba8eb6d       kube-apiserver-default-k8s-diff-port-426789
	86c928953f11f       c80c8dbafe7dd       9 minutes ago       Running             kube-controller-manager     1                   461612656f771       kube-controller-manager-default-k8s-diff-port-426789
	2b7ddbc816fe7       56cc512116c8f       11 minutes ago      Exited              busybox                     0                   ccd71f2cbb3e6       busybox
	23263de730bc8       52546a367cc9e       11 minutes ago      Exited              coredns                     0                   6ad5dd96039c0       coredns-66bc5c9577-7mm74
	5b51fe785fefb       fc25172553d79       11 minutes ago      Exited              kube-proxy                  0                   ad60502d6da58       kube-proxy-2vt8l
	72895cd889d70       5f1f5298c888d       11 minutes ago      Exited              etcd                        0                   576a62b0fd6cb       etcd-default-k8s-diff-port-426789
	f7e912cdcdcaf       c80c8dbafe7dd       11 minutes ago      Exited              kube-controller-manager     0                   4edb4e82778b6       kube-controller-manager-default-k8s-diff-port-426789
	d2ffc106f9c2c       7dd6aaa1717ab       11 minutes ago      Exited              kube-scheduler              0                   94011076ac88a       kube-scheduler-default-k8s-diff-port-426789
	ac49f80c44906       c3994bc696102       11 minutes ago      Exited              kube-apiserver              0                   ca093134f786e       kube-apiserver-default-k8s-diff-port-426789
	
	
	==> containerd <==
	Oct 13 15:46:57 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:57.467236345Z" level=info msg="StartContainer for \"8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e\""
	Oct 13 15:46:57 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:57.550940432Z" level=info msg="StartContainer for \"8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e\" returns successfully"
	Oct 13 15:46:57 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:57.602632629Z" level=info msg="shim disconnected" id=8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e namespace=k8s.io
	Oct 13 15:46:57 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:57.602929138Z" level=warning msg="cleaning up after shim disconnected" id=8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e namespace=k8s.io
	Oct 13 15:46:57 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:57.602949726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:46:58 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:58.510577847Z" level=info msg="RemoveContainer for \"6d4bef84031464a89a4863ee1d1cc523cd3ded623d4e5e05937a6440b71ae9ae\""
	Oct 13 15:46:58 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:46:58.518798192Z" level=info msg="RemoveContainer for \"6d4bef84031464a89a4863ee1d1cc523cd3ded623d4e5e05937a6440b71ae9ae\" returns successfully"
	Oct 13 15:49:34 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:34.435612915Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 13 15:49:34 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:34.439032106Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:49:34 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:34.525195585Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 13 15:49:34 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:34.742165481Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 13 15:49:34 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:34.742251608Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.437934220Z" level=info msg="CreateContainer within sandbox \"c1b8962087cfcfc827402bb90c7c0eac68dd18e3aab047cabd714361d37ce418\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,}"
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.466153515Z" level=info msg="CreateContainer within sandbox \"c1b8962087cfcfc827402bb90c7c0eac68dd18e3aab047cabd714361d37ce418\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,} returns container id \"7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89\""
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.467895020Z" level=info msg="StartContainer for \"7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89\""
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.539328870Z" level=info msg="StartContainer for \"7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89\" returns successfully"
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.590008066Z" level=info msg="shim disconnected" id=7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89 namespace=k8s.io
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.590060740Z" level=warning msg="cleaning up after shim disconnected" id=7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89 namespace=k8s.io
	Oct 13 15:49:40 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:40.590073966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 13 15:49:41 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:41.037531956Z" level=info msg="RemoveContainer for \"8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e\""
	Oct 13 15:49:41 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:41.045172411Z" level=info msg="RemoveContainer for \"8e12b352ff603b0f7eee37c8d044664e900c956a4f3beb04e40399ed1ad1ec7e\" returns successfully"
	Oct 13 15:49:44 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:44.435023197Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 13 15:49:44 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:44.438370297Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Oct 13 15:49:44 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:44.440849692Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Oct 13 15:49:44 default-k8s-diff-port-426789 containerd[723]: time="2025-10-13T15:49:44.440931643Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [23263de730bc84a9ea3450c2307b5724b296cec5c1065e29489213bf64118ec3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36648 - 30430 "HINFO IN 7676052730766108135.5286797628239658464. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023729401s
	
	
	==> coredns [de6045fe8b64456d19efb388a7568d8febac73b8c97f17bd8e0eb15e1d15624e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42843 - 32066 "HINFO IN 8302515203416780237.4486557239833603180. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030238082s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-426789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-426789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=default-k8s-diff-port-426789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T15_41_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 15:41:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-426789
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 15:53:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 15:50:08 +0000   Mon, 13 Oct 2025 15:41:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 15:50:08 +0000   Mon, 13 Oct 2025 15:41:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 15:50:08 +0000   Mon, 13 Oct 2025 15:41:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 15:50:08 +0000   Mon, 13 Oct 2025 15:43:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.176
	  Hostname:    default-k8s-diff-port-426789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 4204e92c5377432a9bb163d826e31270
	  System UUID:                4204e92c-5377-432a-9bb1-63d826e31270
	  Boot ID:                    588422a5-b7ee-4c2d-b867-2dfdd5889e1a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-7mm74                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     11m
	  kube-system                 etcd-default-k8s-diff-port-426789                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-426789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-426789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2vt8l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-426789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-mqvqg                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s8jp6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z6wz8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 9m14s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                11m                    kubelet          Node default-k8s-diff-port-426789 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                    node-controller  Node default-k8s-diff-port-426789 event: Registered Node default-k8s-diff-port-426789 in Controller
	  Normal   Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node default-k8s-diff-port-426789 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m17s                  kubelet          Node default-k8s-diff-port-426789 has been rebooted, boot id: 588422a5-b7ee-4c2d-b867-2dfdd5889e1a
	  Normal   RegisteredNode           9m13s                  node-controller  Node default-k8s-diff-port-426789 event: Registered Node default-k8s-diff-port-426789 in Controller
	
	
	==> dmesg <==
	[Oct13 15:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000045] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000073] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007538] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.828071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.127031] kauditd_printk_skb: 57 callbacks suppressed
	[  +0.109766] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.619409] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.545743] kauditd_printk_skb: 312 callbacks suppressed
	[Oct13 15:44] kauditd_printk_skb: 74 callbacks suppressed
	[ +18.721384] kauditd_printk_skb: 41 callbacks suppressed
	[ +16.001390] kauditd_printk_skb: 7 callbacks suppressed
	[ +15.995456] kauditd_printk_skb: 5 callbacks suppressed
	[Oct13 15:45] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:46] kauditd_printk_skb: 6 callbacks suppressed
	[Oct13 15:49] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [5b4a3be1f05dfa573e97d0c2e51c2cbe90a4d38fe43e4a641303a576e5f7324d] <==
	{"level":"warn","ts":"2025-10-13T15:43:49.709354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.723586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.734252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.746605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.778579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.788764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.796596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.805923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.815522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.821953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.834088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.844995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.853920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.865225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.875913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.888849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.905731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.912155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:49.924446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:43:50.012596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:44:02.577966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"196.270332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7mm74\" limit:1 ","response":"range_response_count:1 size:5807"}
	{"level":"info","ts":"2025-10-13T15:44:02.578096Z","caller":"traceutil/trace.go:172","msg":"trace[1389887952] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-7mm74; range_end:; response_count:1; response_revision:697; }","duration":"196.436196ms","start":"2025-10-13T15:44:02.381641Z","end":"2025-10-13T15:44:02.578078Z","steps":["trace[1389887952] 'agreement among raft nodes before linearized reading'  (duration: 196.120128ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T15:44:02.577416Z","caller":"traceutil/trace.go:172","msg":"trace[40552611] linearizableReadLoop","detail":"{readStateIndex:743; appliedIndex:744; }","duration":"195.617364ms","start":"2025-10-13T15:44:02.381781Z","end":"2025-10-13T15:44:02.577399Z","steps":["trace[40552611] 'read index received'  (duration: 195.611078ms)","trace[40552611] 'applied index is now lower than readState.Index'  (duration: 5.13µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T15:44:02.578599Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.332646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-7mm74\" limit:1 ","response":"range_response_count:1 size:5807"}
	{"level":"info","ts":"2025-10-13T15:44:02.578630Z","caller":"traceutil/trace.go:172","msg":"trace[498056639] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-7mm74; range_end:; response_count:1; response_revision:697; }","duration":"130.372394ms","start":"2025-10-13T15:44:02.448249Z","end":"2025-10-13T15:44:02.578622Z","steps":["trace[498056639] 'agreement among raft nodes before linearized reading'  (duration: 130.249087ms)"],"step_count":1}
	
	
	==> etcd [72895cd889d706c874b68b539b6f600fe1653f8780b81fe725f96794e7f789a2] <==
	{"level":"warn","ts":"2025-10-13T15:41:10.068961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.081351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.091443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.104735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.113054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.122211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.135634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.149539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.157309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.168120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.182381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.190380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.198950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.216608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.217466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.231127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.236549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.245640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.254631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.267379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.273258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.286403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.295908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.305553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T15:41:10.381685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:53:07 up 9 min,  0 users,  load average: 0.14, 0.15, 0.09
	Linux default-k8s-diff-port-426789 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [86a5135c54749edaafd996704297e568945127e6e29c0a2c5c62cdadd604b0ca] <==
	I1013 15:48:51.893910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:48:51.894047       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:48:51.894133       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:48:51.895302       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:49:51.894281       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:49:51.894352       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:49:51.894371       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:49:51.895465       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:49:51.895557       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:49:51.895590       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1013 15:51:51.895599       1 handler_proxy.go:99] no RequestInfo found in the context
	W1013 15:51:51.895761       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:51:51.895881       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 15:51:51.895921       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1013 15:51:51.895931       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1013 15:51:51.897907       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ac49f80c449067b6336cb639bd943db15ccbee8de127bba35ebfb13e852dd547] <==
	I1013 15:41:14.057825       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 15:41:14.080930       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 15:41:18.769714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:41:18.776265       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 15:41:18.968306       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 15:41:19.030058       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 15:42:08.564426       1 conn.go:339] Error on socket receive: read tcp 192.168.50.176:8444->192.168.50.1:37332: use of closed network connection
	I1013 15:42:09.382000       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1013 15:42:09.397357       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:42:09.397550       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1013 15:42:09.397638       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1013 15:42:09.564423       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.111.145.250"}
	W1013 15:42:09.580943       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:42:09.581313       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1013 15:42:09.593708       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 15:42:09.593758       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [86c928953f11fd4c421e931195a2ff2f3704ed53c587680c8baae719888300b0] <==
	I1013 15:46:54.801490       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:24.725901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:24.812049       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:47:54.734221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:47:54.821739       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:24.741509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:24.831848       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:48:54.748894       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:48:54.843055       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:24.755353       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:24.853628       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:49:54.762601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:49:54.865854       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:50:24.769563       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:50:24.876806       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:50:54.776543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:50:54.886488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:51:24.782528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:51:24.896184       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:51:54.788157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:51:54.906939       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:52:24.795073       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:52:24.917164       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1013 15:52:54.801943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 15:52:54.926640       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [f7e912cdcdcafb5c19865296b6084050cb314c8b062d8c8adbdb9de39a23e996] <==
	I1013 15:41:18.085406       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 15:41:18.085563       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 15:41:18.098616       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 15:41:18.107866       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-426789" podCIDRs=["10.244.0.0/24"]
	I1013 15:41:18.112560       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 15:41:18.112985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 15:41:18.113346       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 15:41:18.113411       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 15:41:18.114231       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 15:41:18.113893       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 15:41:18.115619       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 15:41:18.115989       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 15:41:18.116107       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 15:41:18.116761       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 15:41:18.117096       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 15:41:18.118227       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 15:41:18.118430       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 15:41:18.118703       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 15:41:18.118816       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 15:41:18.120228       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 15:41:18.123563       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 15:41:18.124257       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 15:41:18.127876       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 15:41:18.131217       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 15:41:18.132545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5b51fe785fefb050d96e91fde822c328cd8ead2a0f7976da79e1f6dbde02279c] <==
	I1013 15:41:20.154264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:41:20.255378       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:41:20.255510       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.176"]
	E1013 15:41:20.258313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:41:20.521344       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:41:20.521438       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:41:20.521769       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:41:20.579437       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:41:20.580731       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:41:20.581794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:41:20.611549       1 config.go:200] "Starting service config controller"
	I1013 15:41:20.624796       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:41:20.620004       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:41:20.632063       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:41:20.620036       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:41:20.632082       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:41:20.632099       1 config.go:309] "Starting node config controller"
	I1013 15:41:20.632103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:41:20.732899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:41:20.732965       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:41:20.733029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 15:41:20.733979       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ffd2fa4c7492a57b652c4ab1b970f827ad4edd431b29cd5ca8b3ce59d973cf61] <==
	I1013 15:43:52.539780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 15:43:52.642000       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 15:43:52.642055       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.176"]
	E1013 15:43:52.642187       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 15:43:52.741014       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 15:43:52.741091       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 15:43:52.741119       1 server_linux.go:132] "Using iptables Proxier"
	I1013 15:43:52.767352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 15:43:52.768621       1 server.go:527] "Version info" version="v1.34.1"
	I1013 15:43:52.770966       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:43:52.796397       1 config.go:200] "Starting service config controller"
	I1013 15:43:52.796424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 15:43:52.796459       1 config.go:106] "Starting endpoint slice config controller"
	I1013 15:43:52.796467       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 15:43:52.796486       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 15:43:52.796492       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 15:43:52.799201       1 config.go:309] "Starting node config controller"
	I1013 15:43:52.803196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 15:43:52.803213       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 15:43:52.897012       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 15:43:52.898546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 15:43:52.899186       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2e09d68b0f5af7335d8ba3da1ecacabd8c23a329ee55e5957aefc17d704710b9] <==
	I1013 15:43:48.985640       1 serving.go:386] Generated self-signed cert in-memory
	W1013 15:43:50.748715       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 15:43:50.748774       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 15:43:50.748787       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 15:43:50.748793       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 15:43:50.883178       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 15:43:50.883539       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 15:43:50.889736       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:43:50.889796       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 15:43:50.897237       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 15:43:50.897499       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 15:43:50.991208       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d2ffc106f9c2c4d059c2afcb8d29bdf8ad69a66949a72c22462e0769dda93929] <==
	E1013 15:41:11.157250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 15:41:11.157311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 15:41:11.157326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 15:41:11.157472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:41:11.157511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 15:41:11.157080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:41:11.159340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:41:11.159837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:41:11.160131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 15:41:11.964817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 15:41:11.964819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 15:41:11.994983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 15:41:12.007659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 15:41:12.078104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 15:41:12.101839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 15:41:12.116551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 15:41:12.137121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 15:41:12.152016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 15:41:12.183657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 15:41:12.251027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 15:41:12.337502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 15:41:12.377754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 15:41:12.458524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 15:41:12.498336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1013 15:41:14.239798       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 15:51:43 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:51:43.433706    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:51:51 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:51:51.433628    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:51:56 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:51:56.432949    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:51:56 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:51:56.433133    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	Oct 13 15:51:57 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:51:57.434445    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:52:03 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:03.434488    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:52:09 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:09.434894    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:52:10 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:52:10.436145    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:52:10 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:10.436316    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	Oct 13 15:52:18 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:18.434178    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:52:23 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:23.434503    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:52:25 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:52:25.433117    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:52:25 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:25.433354    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	Oct 13 15:52:30 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:30.435203    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:52:38 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:38.435285    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:52:39 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:52:39.433109    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:52:39 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:39.433369    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	Oct 13 15:52:45 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:45.434542    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:52:49 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:49.434569    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:52:51 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:52:51.433383    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:52:51 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:51.433921    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	Oct 13 15:52:57 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:52:57.433902    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-mqvqg" podUID="e7582897-ca82-4255-9bc3-8e9563b9e410"
	Oct 13 15:53:01 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:53:01.434863    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z6wz8" podUID="c1d2745a-8b1e-4dd7-878e-d4822a3f956d"
	Oct 13 15:53:05 default-k8s-diff-port-426789 kubelet[1052]: I1013 15:53:05.433957    1052 scope.go:117] "RemoveContainer" containerID="7996dc307393db30b4253addea72b1d85064715f481aaee8a1ddd36c97f5fe89"
	Oct 13 15:53:05 default-k8s-diff-port-426789 kubelet[1052]: E1013 15:53:05.434152    1052 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s8jp6_kubernetes-dashboard(126ceb20-6840-477d-b3fc-6f5485678613)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s8jp6" podUID="126ceb20-6840-477d-b3fc-6f5485678613"
	
	
	==> storage-provisioner [9afcd220dce5c4d3e78a3cb200cde3a93983f9db5b4b0444fe179f994a155387] <==
	I1013 15:43:52.339539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 15:44:22.350403       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e713caac797800c64912a9879f9618f1757e72e670b852ad4416f8aa6c985ac8] <==
	W1013 15:52:43.902486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:45.906315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:45.913472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:47.917733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:47.927840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:49.931707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:49.936499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:51.941466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:51.952085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:53.957514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:53.963160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:55.967144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:55.977514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:57.982277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:57.989392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:52:59.995152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:00.001000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:02.005527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:02.011240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:04.015422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:04.024483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:06.028727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:06.035544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:08.040572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 15:53:08.048351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
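The warning burst above repeats on a roughly two-second cadence: the client re-requests the deprecated core v1 Endpoints resource, and the API server attaches the deprecation warning (pointing at discovery.k8s.io/v1 EndpointSlice) to every response. That cadence can be confirmed from the klog timestamps; a minimal sketch, assuming the `HH:MM:SS.uuuuuu` timestamp is the second whitespace-separated field (sample lines are a subset of the run above, messages truncated):

```shell
# Print the gap in whole seconds between consecutive klog warning lines.
# Field 2 is "HH:MM:SS.uuuuuu"; split on ':' and '.' to get seconds-of-day.
printf '%s\n' \
  'W1013 15:52:49.931707 1 warnings.go:70] v1 Endpoints is deprecated ...' \
  'W1013 15:52:51.941466 1 warnings.go:70] v1 Endpoints is deprecated ...' \
  'W1013 15:52:53.957514 1 warnings.go:70] v1 Endpoints is deprecated ...' |
awk '{ split($2, t, "[:.]"); s = t[1]*3600 + t[2]*60 + t[3]
       if (prev) print s - prev
       prev = s }'
# prints 2 and 2: one request (and one warning) roughly every two seconds
```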
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-mqvqg kubernetes-dashboard-855c9754f9-z6wz8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 describe pod metrics-server-746fcd58dc-mqvqg kubernetes-dashboard-855c9754f9-z6wz8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-426789 describe pod metrics-server-746fcd58dc-mqvqg kubernetes-dashboard-855c9754f9-z6wz8: exit status 1 (62.841884ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-mqvqg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-z6wz8" not found

** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-426789 describe pod metrics-server-746fcd58dc-mqvqg kubernetes-dashboard-855c9754f9-z6wz8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.02s)
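The post-mortem sequence above first lists pods whose phase is not Running (`--field-selector=status.phase!=Running`) and then runs `kubectl describe` on each; the describe step returns NotFound because the pods were deleted between the two calls, which is why the helper reports exit status 1. The phase filter itself can be sketched over plain NAME/PHASE text (the non-Running pod names are reused from the output above; the Running one is purely illustrative):

```shell
# Select pod names whose phase column is anything other than "Running",
# mirroring what the field selector status.phase!=Running returns.
printf '%s\n' \
  'metrics-server-746fcd58dc-mqvqg Pending' \
  'coredns-5dd5756b68-abcde Running' \
  'kubernetes-dashboard-855c9754f9-z6wz8 Pending' |
awk '$2 != "Running" { print $1 }'
# prints the metrics-server and kubernetes-dashboard pod names only
```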


Test pass (264/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.2
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.52
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.68
22 TestOffline 84.07
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 419.93
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
36 TestAddons/parallel/RegistryCreds 0.69
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 7.18
42 TestAddons/parallel/Headlamp 29
43 TestAddons/parallel/CloudSpanner 6.62
45 TestAddons/parallel/NvidiaDevicePlugin 6.54
48 TestAddons/StoppedEnableDisable 83.22
49 TestCertOptions 65.31
50 TestCertExpiration 292.68
52 TestForceSystemdFlag 64.91
53 TestForceSystemdEnv 85.9
55 TestKVMDriverInstallOrUpdate 0.7
59 TestErrorSpam/setup 43.71
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.85
62 TestErrorSpam/pause 1.71
63 TestErrorSpam/unpause 1.93
64 TestErrorSpam/stop 5.67
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 77.29
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.47
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
76 TestFunctional/serial/CacheCmd/cache/add_local 1.01
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 41.05
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.57
87 TestFunctional/serial/LogsFileCmd 1.61
88 TestFunctional/serial/InvalidService 4.12
90 TestFunctional/parallel/ConfigCmd 0.36
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.85
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.37
105 TestFunctional/parallel/FileSync 0.23
106 TestFunctional/parallel/CertSync 1.4
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
114 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
127 TestFunctional/parallel/MountCmd/any-port 6.46
128 TestFunctional/parallel/MountCmd/specific-port 1.7
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
134 TestFunctional/parallel/ImageCommands/ImageBuild 3.51
135 TestFunctional/parallel/ImageCommands/Setup 0.43
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
145 TestFunctional/parallel/ProfileCmd/profile_list 0.35
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
147 TestFunctional/parallel/Version/short 0.06
148 TestFunctional/parallel/Version/components 0.63
149 TestFunctional/parallel/ServiceCmd/List 1.26
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 217.77
162 TestMultiControlPlane/serial/DeployApp 5.68
163 TestMultiControlPlane/serial/PingHostFromPods 1.32
164 TestMultiControlPlane/serial/AddWorkerNode 50.12
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 13.81
168 TestMultiControlPlane/serial/StopSecondaryNode 86.98
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 26.5
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.06
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 365.86
173 TestMultiControlPlane/serial/DeleteSecondaryNode 8.16
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
175 TestMultiControlPlane/serial/StopCluster 247.78
176 TestMultiControlPlane/serial/RestartCluster 101.44
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 76.41
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
183 TestJSONOutput/start/Command 82.55
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 84.77
215 TestMountStart/serial/StartWithMountFirst 22.04
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 24.46
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.61
220 TestMountStart/serial/VerifyMountPostDelete 0.4
221 TestMountStart/serial/Stop 1.23
222 TestMountStart/serial/RestartStopped 20.1
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 104.6
227 TestMultiNode/serial/DeployApp2Nodes 3.96
228 TestMultiNode/serial/PingHostFrom2Pods 0.83
229 TestMultiNode/serial/AddNode 42.05
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.8
233 TestMultiNode/serial/StopNode 2.35
234 TestMultiNode/serial/StartAfterStop 35.42
235 TestMultiNode/serial/RestartKeepsNodes 342.77
236 TestMultiNode/serial/DeleteNode 2.26
237 TestMultiNode/serial/StopMultiNode 173.27
238 TestMultiNode/serial/RestartMultiNode 79.84
239 TestMultiNode/serial/ValidateNameConflict 41.72
244 TestPreload 120.12
246 TestScheduledStopUnix 112.7
250 TestRunningBinaryUpgrade 121.1
252 TestKubernetesUpgrade 124.71
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 108.64
264 TestNetworkPlugins/group/false 3.93
268 TestNoKubernetes/serial/StartWithStopK8s 55.63
269 TestNoKubernetes/serial/Start 33.5
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
271 TestNoKubernetes/serial/ProfileList 22.09
272 TestNoKubernetes/serial/Stop 1.41
273 TestStoppedBinaryUpgrade/Setup 0.43
274 TestStoppedBinaryUpgrade/Upgrade 114.22
275 TestNoKubernetes/serial/StartNoArgs 38.99
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
285 TestPause/serial/Start 87.38
286 TestNetworkPlugins/group/auto/Start 68.87
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
288 TestNetworkPlugins/group/kindnet/Start 65.8
289 TestNetworkPlugins/group/auto/KubeletFlags 0.23
290 TestNetworkPlugins/group/auto/NetCatPod 10.32
291 TestPause/serial/SecondStartNoReconfiguration 67.59
292 TestNetworkPlugins/group/auto/DNS 0.22
293 TestNetworkPlugins/group/auto/Localhost 0.15
294 TestNetworkPlugins/group/auto/HairPin 0.12
296 TestNetworkPlugins/group/custom-flannel/Start 83.8
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
299 TestNetworkPlugins/group/kindnet/NetCatPod 10.92
300 TestPause/serial/Pause 0.82
301 TestPause/serial/VerifyStatus 0.32
302 TestPause/serial/Unpause 0.87
303 TestPause/serial/PauseAgain 0.95
304 TestPause/serial/DeletePaused 0.79
305 TestPause/serial/VerifyDeletedResources 4.09
306 TestNetworkPlugins/group/kindnet/DNS 0.2
307 TestNetworkPlugins/group/kindnet/Localhost 0.16
308 TestNetworkPlugins/group/kindnet/HairPin 0.16
309 TestNetworkPlugins/group/enable-default-cni/Start 85.14
310 TestNetworkPlugins/group/flannel/Start 87.07
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
313 TestNetworkPlugins/group/custom-flannel/DNS 0.18
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
316 TestNetworkPlugins/group/bridge/Start 84.75
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.35
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
324 TestNetworkPlugins/group/flannel/NetCatPod 10.29
326 TestStartStop/group/old-k8s-version/serial/FirstStart 98.24
327 TestNetworkPlugins/group/flannel/DNS 0.2
328 TestNetworkPlugins/group/flannel/Localhost 0.15
329 TestNetworkPlugins/group/flannel/HairPin 0.14
331 TestStartStop/group/no-preload/serial/FirstStart 83.47
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
333 TestNetworkPlugins/group/bridge/NetCatPod 10.33
334 TestNetworkPlugins/group/bridge/DNS 0.18
335 TestNetworkPlugins/group/bridge/Localhost 0.14
336 TestNetworkPlugins/group/bridge/HairPin 0.14
338 TestStartStop/group/embed-certs/serial/FirstStart 90.8
339 TestStartStop/group/old-k8s-version/serial/DeployApp 9.37
340 TestStartStop/group/no-preload/serial/DeployApp 8.35
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.41
342 TestStartStop/group/old-k8s-version/serial/Stop 89.5
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
344 TestStartStop/group/no-preload/serial/Stop 87.17
345 TestStartStop/group/embed-certs/serial/DeployApp 9.3
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
347 TestStartStop/group/embed-certs/serial/Stop 83.61
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/old-k8s-version/serial/SecondStart 42.59
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
351 TestStartStop/group/no-preload/serial/SecondStart 55.27
353 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
354 TestStartStop/group/embed-certs/serial/SecondStart 45.77
358 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.33
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 124.08
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
364 TestStartStop/group/default-k8s-diff-port/serial/Stop 73.54
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/old-k8s-version/serial/Pause 2.94
368 TestStartStop/group/newest-cni/serial/FirstStart 49.88
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 42.49
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
373 TestStartStop/group/newest-cni/serial/Stop 2.38
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 38.87
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
380 TestStartStop/group/newest-cni/serial/Pause 3.13
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/no-preload/serial/Pause 2.85
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/embed-certs/serial/Pause 2.82
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 102.08
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.87
TestDownloadOnly/v1.28.0/json-events (6.2s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (6.195300023s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.20s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 13:55:15.295824 1814927 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1013 13:55:15.295936 1814927 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-130651
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-130651: exit status 85 (67.312226ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                      ARGS                                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:09.144398 1814939 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:09.144651 1814939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:09.144659 1814939 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:09.144663 1814939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:09.144911 1814939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	W1013 13:55:09.145045 1814939 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-1810975/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-1810975/.minikube/config/config.json: no such file or directory
	I1013 13:55:09.145539 1814939 out.go:368] Setting JSON to true
	I1013 13:55:09.146507 1814939 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20257,"bootTime":1760343452,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:09.146611 1814939 start.go:141] virtualization: kvm guest
	I1013 13:55:09.148905 1814939 out.go:99] [download-only-130651] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1013 13:55:09.149063 1814939 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 13:55:09.149128 1814939 notify.go:220] Checking for updates...
	I1013 13:55:09.150434 1814939 out.go:171] MINIKUBE_LOCATION=21724
	I1013 13:55:09.151998 1814939 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:09.153407 1814939 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:09.154785 1814939 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:09.156210 1814939 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 13:55:09.158547 1814939 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 13:55:09.158874 1814939 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 13:55:09.191166 1814939 out.go:99] Using the kvm2 driver based on user configuration
	I1013 13:55:09.191202 1814939 start.go:305] selected driver: kvm2
	I1013 13:55:09.191208 1814939 start.go:925] validating driver "kvm2" against <nil>
	I1013 13:55:09.191581 1814939 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:09.191668 1814939 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:09.206188 1814939 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:09.206221 1814939 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-1810975/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 13:55:09.220780 1814939 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 13:55:09.220828 1814939 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 13:55:09.221617 1814939 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1013 13:55:09.221843 1814939 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 13:55:09.221891 1814939 cni.go:84] Creating CNI manager for ""
	I1013 13:55:09.221961 1814939 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1013 13:55:09.221977 1814939 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 13:55:09.222038 1814939 start.go:349] cluster config:
	{Name:download-only-130651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-130651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 13:55:09.222285 1814939 iso.go:125] acquiring lock: {Name:mka16c67d576cb4895cf08a3c34fc1f49ca4adc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 13:55:09.224209 1814939 out.go:99] Downloading VM boot image ...
	I1013 13:55:09.224259 1814939 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 13:55:12.231322 1814939 out.go:99] Starting "download-only-130651" primary control-plane node in "download-only-130651" cluster
	I1013 13:55:12.231358 1814939 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1013 13:55:12.251205 1814939 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1013 13:55:12.251242 1814939 cache.go:58] Caching tarball of preloaded images
	I1013 13:55:12.251415 1814939 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1013 13:55:12.253465 1814939 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1013 13:55:12.253489 1814939 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1013 13:55:12.278238 1814939 preload.go:290] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1013 13:55:12.278372 1814939 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-130651 host does not exist
	  To start a cluster, run: "minikube start -p download-only-130651"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
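`minikube logs` exits with status 85 here because the download-only profile was never actually started (note the "host does not exist" hint in the output), and the test accepts that specific non-zero status as expected. Branching on an exact exit status in shell looks like the sketch below; `sh -c 'exit 85'` merely stands in for the real `minikube logs` invocation:

```shell
# Capture a command's exit status without aborting under `set -e`,
# then compare it against the one specific value we expect.
status=0
sh -c 'exit 85' || status=$?   # placeholder for: out/minikube-linux-amd64 logs -p <profile>
if [ "$status" -eq 85 ]; then
  echo "got the expected exit status for a never-started profile"
else
  echo "unexpected exit status: $status"
fi
```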

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-130651
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (3.52s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (3.523241484s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.52s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 13:55:19.186240 1814927 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1013 13:55:19.186302 1814927 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-1810975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-459703
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-459703: exit status 85 (68.621865ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                      ARGS                                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-130651 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ delete  │ -p download-only-130651                                                                                                                                                                                         │ download-only-130651 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │ 13 Oct 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-459703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-459703 │ jenkins │ v1.37.0 │ 13 Oct 25 13:55 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 13:55:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 13:55:15.706731 1815147 out.go:360] Setting OutFile to fd 1 ...
	I1013 13:55:15.707053 1815147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:15.707064 1815147 out.go:374] Setting ErrFile to fd 2...
	I1013 13:55:15.707069 1815147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 13:55:15.707263 1815147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 13:55:15.707780 1815147 out.go:368] Setting JSON to true
	I1013 13:55:15.708754 1815147 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":20264,"bootTime":1760343452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 13:55:15.708856 1815147 start.go:141] virtualization: kvm guest
	I1013 13:55:15.710897 1815147 out.go:99] [download-only-459703] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 13:55:15.711120 1815147 notify.go:220] Checking for updates...
	I1013 13:55:15.712546 1815147 out.go:171] MINIKUBE_LOCATION=21724
	I1013 13:55:15.714198 1815147 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 13:55:15.715650 1815147 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 13:55:15.717148 1815147 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 13:55:15.718771 1815147 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-459703 host does not exist
	  To start a cluster, run: "minikube start -p download-only-459703"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-459703
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1013 13:55:19.845708 1814927 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-039949 --alsologtostderr --binary-mirror http://127.0.0.1:46019 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-039949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-039949
--- PASS: TestBinaryMirror (0.68s)

TestOffline (84.07s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-432359 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-432359 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m23.222178857s)
helpers_test.go:175: Cleaning up "offline-containerd-432359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-432359
--- PASS: TestOffline (84.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214022
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-214022: exit status 85 (58.385428ms)
-- stdout --
	* Profile "addons-214022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214022"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214022
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-214022: exit status 85 (59.043497ms)
-- stdout --
	* Profile "addons-214022" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214022"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (419.93s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-214022 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m59.926349985s)
--- PASS: TestAddons/Setup (419.93s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-214022 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-214022 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-214022 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-214022 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7048dd64-49df-4427-b467-30bed2944d3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7048dd64-49df-4427-b467-30bed2944d3e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004491896s
addons_test.go:694: (dbg) Run:  kubectl --context addons-214022 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-214022 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-214022 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

TestAddons/parallel/RegistryCreds (0.69s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.929482ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214022
addons_test.go:332: (dbg) Run:  kubectl --context addons-214022 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lrthv" [e0510fdb-dc82-40c5-8514-5832cf5b5ddb] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.009020962s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (7.18s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 12.225004ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wlkcr" [ab18753b-f64b-4e39-81de-1c8f9f935cfd] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004600455s
addons_test.go:463: (dbg) Run:  kubectl --context addons-214022 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable metrics-server --alsologtostderr -v=1: (1.097029764s)
--- PASS: TestAddons/parallel/MetricsServer (7.18s)

TestAddons/parallel/Headlamp (29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-214022 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-214022 --alsologtostderr -v=1: (1.193940026s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vlpb8" [a144caff-85c1-4010-bfc6-0434e2888bdb] Pending
helpers_test.go:352: "headlamp-6945c6f4d-vlpb8" [a144caff-85c1-4010-bfc6-0434e2888bdb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vlpb8" [a144caff-85c1-4010-bfc6-0434e2888bdb] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vlpb8" [a144caff-85c1-4010-bfc6-0434e2888bdb] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.00405122s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-214022 addons disable headlamp --alsologtostderr -v=1: (5.804520154s)
--- PASS: TestAddons/parallel/Headlamp (29.00s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-whp5m" [9b91e21c-d6cb-471b-a78c-4e45d05990cb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005329519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-v4lvw" [06fb9add-b929-4b88-b3c5-e67537d22798] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003990626s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-214022 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/StoppedEnableDisable (83.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-214022
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-214022: (1m22.916099934s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214022
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214022
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-214022
--- PASS: TestAddons/StoppedEnableDisable (83.22s)

TestCertOptions (65.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-740924 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-740924 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m3.96228392s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-740924 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-740924 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-740924 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-740924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-740924
--- PASS: TestCertOptions (65.31s)

TestCertExpiration (292.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-407214 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-407214 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m4.769679746s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-407214 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-407214 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (47.220180466s)
helpers_test.go:175: Cleaning up "cert-expiration-407214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-407214
--- PASS: TestCertExpiration (292.68s)

TestForceSystemdFlag (64.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-755046 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-755046 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m3.919745648s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-755046 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-755046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-755046
--- PASS: TestForceSystemdFlag (64.91s)

TestForceSystemdEnv (85.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-509862 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-509862 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m24.824692153s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-509862 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-509862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-509862
--- PASS: TestForceSystemdEnv (85.90s)

TestKVMDriverInstallOrUpdate (0.7s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1013 15:20:10.898661 1814927 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1013 15:20:10.899014 1814927 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate559734059/001:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 15:20:10.933087 1814927 install.go:163] /tmp/TestKVMDriverInstallOrUpdate559734059/001/docker-machine-driver-kvm2 version is 1.1.1
W1013 15:20:10.933156 1814927 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1013 15:20:10.933330 1814927 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1013 15:20:10.933390 1814927 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate559734059/001/docker-machine-driver-kvm2
I1013 15:20:11.453961 1814927 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate559734059/001:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 15:20:11.469583 1814927 install.go:163] /tmp/TestKVMDriverInstallOrUpdate559734059/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.70s)

TestErrorSpam/setup (43.71s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-322009 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-322009 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-322009 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-322009 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (43.705696363s)
--- PASS: TestErrorSpam/setup (43.71s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 pause
--- PASS: TestErrorSpam/pause (1.71s)

TestErrorSpam/unpause (1.93s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (5.67s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop: (1.860771423s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop: (1.924595547s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-322009 --log_dir /tmp/nospam-322009 stop: (1.87982549s)
--- PASS: TestErrorSpam/stop (5.67s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-1810975/.minikube/files/etc/test/nested/copy/1814927/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.29s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 14:22:20.522315 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.528727 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.540086 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.561485 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.603025 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.684462 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:20.845970 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:21.167772 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:21.809965 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:23.092323 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:25.655339 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:30.776972 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:22:41.018480 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:23:01.500571 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-608191 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m17.288359113s)
--- PASS: TestFunctional/serial/StartWithProxy (77.29s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.47s)
=== RUN   TestFunctional/serial/SoftStart
I1013 14:23:20.779234 1814927 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --alsologtostderr -v=8
E1013 14:23:42.462562 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-608191 --alsologtostderr -v=8: (50.466416117s)
functional_test.go:678: soft start took 50.467257259s for "functional-608191" cluster.
I1013 14:24:11.246083 1814927 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (50.47s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-608191 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 cache add registry.k8s.io/pause:3.1: (1.013651411s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 cache add registry.k8s.io/pause:3.3: (1.049288964s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-608191 /tmp/TestFunctionalserialCacheCmdcacheadd_local201223403/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache add minikube-local-cache-test:functional-608191
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache delete minikube-local-cache-test:functional-608191
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-608191
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (234.143016ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 kubectl -- --context functional-608191 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-608191 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (41.05s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-608191 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.051646782s)
functional_test.go:776: restart took 41.051774358s for "functional-608191" cluster.
I1013 14:24:58.836488 1814927 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.05s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-608191 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.57s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs: (1.572371223s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

TestFunctional/serial/LogsFileCmd (1.61s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 logs --file /tmp/TestFunctionalserialLogsFileCmd1350524877/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 logs --file /tmp/TestFunctionalserialLogsFileCmd1350524877/001/logs.txt: (1.611589829s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

TestFunctional/serial/InvalidService (4.12s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-608191 apply -f testdata/invalidsvc.yaml
E1013 14:25:04.385959 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-608191
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-608191: exit status 115 (300.706482ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.10:31192 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-608191 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 config get cpus: exit status 14 (53.459612ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 config get cpus: exit status 14 (57.661969ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 23 (139.794596ms)
-- stdout --
	* [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1013 14:31:19.153108 1831914 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:31:19.153208 1831914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.153216 1831914 out.go:374] Setting ErrFile to fd 2...
	I1013 14:31:19.153220 1831914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:19.153410 1831914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:31:19.153881 1831914 out.go:368] Setting JSON to false
	I1013 14:31:19.154844 1831914 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22427,"bootTime":1760343452,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:31:19.154952 1831914 start.go:141] virtualization: kvm guest
	I1013 14:31:19.156675 1831914 out.go:179] * [functional-608191] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 14:31:19.157980 1831914 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:31:19.157996 1831914 notify.go:220] Checking for updates...
	I1013 14:31:19.160640 1831914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:31:19.161790 1831914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:31:19.162875 1831914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:31:19.164474 1831914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:31:19.166195 1831914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:31:19.168133 1831914 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:31:19.168772 1831914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.168879 1831914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.183466 1831914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I1013 14:31:19.184041 1831914 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.184651 1831914 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.184693 1831914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.185330 1831914 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.185603 1831914 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.185898 1831914 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:31:19.186212 1831914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:19.186264 1831914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:19.200760 1831914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I1013 14:31:19.201204 1831914 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:19.201709 1831914 main.go:141] libmachine: Using API Version  1
	I1013 14:31:19.201751 1831914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:19.202107 1831914 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:19.202321 1831914 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:19.234292 1831914 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 14:31:19.235437 1831914 start.go:305] selected driver: kvm2
	I1013 14:31:19.235458 1831914 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:19.235562 1831914 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:31:19.237601 1831914 out.go:203] 
	W1013 14:31:19.238820 1831914 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 14:31:19.239958 1831914 out.go:203] 

** /stderr **
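The dry-run exit above is minikube's pre-flight memory validation rejecting `--memory 250MB` against an 1800MB floor. A minimal sketch of that kind of check, assuming hypothetical names (`validateRequestedMemory`, `minMemoryMB`) rather than minikube's actual implementation:

```go
package main

import "fmt"

// minMemoryMB mirrors the 1800MB usable minimum reported in the log above.
// The constant and function names here are illustrative, not minikube's own.
const minMemoryMB = 1800

// validateRequestedMemory rejects allocations below the usable minimum,
// reproducing the RSRC_INSUFFICIENT_REQ_MEMORY condition in the stderr above.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // non-nil: 250 is below the floor
	fmt.Println(validateRequestedMemory(4096)) // nil: the profile's 4096MB passes
}
```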
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-608191 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 23 (142.659928ms)

-- stdout --
	* [functional-608191] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1013 14:31:18.166753 1831797 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:31:18.166866 1831797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:18.166871 1831797 out.go:374] Setting ErrFile to fd 2...
	I1013 14:31:18.166875 1831797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:31:18.167182 1831797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:31:18.167624 1831797 out.go:368] Setting JSON to false
	I1013 14:31:18.168650 1831797 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":22426,"bootTime":1760343452,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 14:31:18.168759 1831797 start.go:141] virtualization: kvm guest
	I1013 14:31:18.170695 1831797 out.go:179] * [functional-608191] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1013 14:31:18.171977 1831797 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 14:31:18.172001 1831797 notify.go:220] Checking for updates...
	I1013 14:31:18.174082 1831797 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 14:31:18.175545 1831797 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 14:31:18.177185 1831797 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 14:31:18.178508 1831797 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 14:31:18.179899 1831797 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 14:31:18.181555 1831797 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:31:18.181986 1831797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:18.182101 1831797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:18.196678 1831797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1013 14:31:18.197245 1831797 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:18.197844 1831797 main.go:141] libmachine: Using API Version  1
	I1013 14:31:18.197877 1831797 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:18.198195 1831797 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:18.198500 1831797 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:18.198865 1831797 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 14:31:18.199320 1831797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:31:18.199365 1831797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:31:18.213919 1831797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1013 14:31:18.214452 1831797 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:31:18.215017 1831797 main.go:141] libmachine: Using API Version  1
	I1013 14:31:18.215045 1831797 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:31:18.215422 1831797 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:31:18.215688 1831797 main.go:141] libmachine: (functional-608191) Calling .DriverName
	I1013 14:31:18.247322 1831797 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1013 14:31:18.248608 1831797 start.go:305] selected driver: kvm2
	I1013 14:31:18.248627 1831797 start.go:925] validating driver "kvm2" against &{Name:functional-608191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-608191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 14:31:18.248777 1831797 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 14:31:18.251065 1831797 out.go:203] 
	W1013 14:31:18.252400 1831797 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 14:31:18.253652 1831797 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.85s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh -n functional-608191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cp functional-608191:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd956932058/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh -n functional-608191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh -n functional-608191 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1814927/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /etc/test/nested/copy/1814927/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1814927.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /etc/ssl/certs/1814927.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1814927.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /usr/share/ca-certificates/1814927.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/18149272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /etc/ssl/certs/18149272.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/18149272.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /usr/share/ca-certificates/18149272.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-608191 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
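The NodeLabels check above hands kubectl a Go template that ranges over the node's label map and prints each key. The same template syntax can be exercised directly with `text/template`; the helper name and label values below are stand-ins, not the node's real labels:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderLabelKeys applies the same '{{range $k, $v := ...}}{{$k}} {{end}}'
// template the test passes to kubectl, against an in-memory label map.
// text/template visits map keys in sorted order, so output is deterministic.
func renderLabelKeys(labels map[string]string) string {
	tmpl := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
	var sb strings.Builder
	if err := tmpl.Execute(&sb, labels); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Illustrative labels only; a real node carries many more.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-608191",
		"kubernetes.io/os":       "linux",
	}
	fmt.Println(renderLabelKeys(labels))
}
```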

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "sudo systemctl is-active docker": exit status 1 (229.254987ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "sudo systemctl is-active crio": exit status 1 (237.146255ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
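The "exit status 3" lines above are expected: `systemctl is-active` prints the unit state and exits 0 only for `active`, while an inactive unit reports the LSB-style code 3, which ssh then propagates. A small illustrative mapping (the helper is not part of the test suite, and real systemd distinguishes more states than sketched here):

```go
package main

import "fmt"

// isActiveExitCode models how `systemctl is-active` maps a unit state to an
// exit code: 0 for active, 3 otherwise (the code seen for docker and crio
// above). This is a simplification; systemd also has activating, failed, etc.
func isActiveExitCode(state string) int {
	switch state {
	case "active":
		return 0
	default:
		// inactive/dead units report 3, matching the log output above
		return 3
	}
}

func main() {
	fmt.Println(isActiveExitCode("inactive")) // 3, as the test expects for non-active runtimes
	fmt.Println(isActiveExitCode("active"))   // 0 would mean the runtime is enabled
}
```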

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/MountCmd/any-port (6.46s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdany-port2065470821/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760365508263350239" to /tmp/TestFunctionalparallelMountCmdany-port2065470821/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760365508263350239" to /tmp/TestFunctionalparallelMountCmdany-port2065470821/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760365508263350239" to /tmp/TestFunctionalparallelMountCmdany-port2065470821/001/test-1760365508263350239
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.023233ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 14:25:08.482826 1814927 retry.go:31] will retry after 405.753709ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 14:25 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 14:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 14:25 test-1760365508263350239
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh cat /mount-9p/test-1760365508263350239
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-608191 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4e6e8eda-9ef9-4daf-a8ea-a688468ceb70] Pending
helpers_test.go:352: "busybox-mount" [4e6e8eda-9ef9-4daf-a8ea-a688468ceb70] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4e6e8eda-9ef9-4daf-a8ea-a688468ceb70] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4e6e8eda-9ef9-4daf-a8ea-a688468ceb70] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003734472s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-608191 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdany-port2065470821/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.46s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdspecific-port931733410/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.046087ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 14:25:14.952743 1814927 retry.go:31] will retry after 435.662214ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdspecific-port931733410/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "sudo umount -f /mount-9p": exit status 1 (211.148477ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-608191 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdspecific-port931733410/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T" /mount1: exit status 1 (229.258809ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1013 14:25:16.658762 1814927 retry.go:31] will retry after 328.967302ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-608191 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-608191 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2773280341/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-608191 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-608191
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-608191
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-608191 image ls --format short --alsologtostderr:
I1013 14:35:10.306416 1833147 out.go:360] Setting OutFile to fd 1 ...
I1013 14:35:10.306805 1833147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:10.306817 1833147 out.go:374] Setting ErrFile to fd 2...
I1013 14:35:10.306821 1833147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:10.307121 1833147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:35:10.307783 1833147 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:10.307885 1833147 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:10.308316 1833147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:10.308368 1833147 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:10.323121 1833147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
I1013 14:35:10.323798 1833147 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:10.324419 1833147 main.go:141] libmachine: Using API Version  1
I1013 14:35:10.324445 1833147 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:10.324805 1833147 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:10.325050 1833147 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:35:10.327420 1833147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:10.327481 1833147 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:10.341456 1833147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
I1013 14:35:10.341992 1833147 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:10.342441 1833147 main.go:141] libmachine: Using API Version  1
I1013 14:35:10.342464 1833147 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:10.342899 1833147 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:10.343200 1833147 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:35:10.343440 1833147 ssh_runner.go:195] Run: systemctl --version
I1013 14:35:10.343478 1833147 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:35:10.347183 1833147 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:10.347760 1833147 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:35:10.347797 1833147 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:10.348029 1833147 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:35:10.348276 1833147 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:35:10.348519 1833147 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:35:10.348737 1833147 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:35:10.450514 1833147 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 14:35:10.506622 1833147 main.go:141] libmachine: Making call to close driver server
I1013 14:35:10.506636 1833147 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:10.507017 1833147 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:10.507036 1833147 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:10.507049 1833147 main.go:141] libmachine: Making call to close driver server
I1013 14:35:10.507057 1833147 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:10.507359 1833147 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:10.507380 1833147 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:10.507446 1833147 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-608191 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ docker.io/kicbase/echo-server               │ functional-608191  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-608191  │ sha256:742887 │ 991B   │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-608191 image ls --format table --alsologtostderr:
I1013 14:35:12.356320 1833321 out.go:360] Setting OutFile to fd 1 ...
I1013 14:35:12.356580 1833321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:12.356590 1833321 out.go:374] Setting ErrFile to fd 2...
I1013 14:35:12.356593 1833321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:12.356809 1833321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:35:12.357387 1833321 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:12.357474 1833321 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:12.357911 1833321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:12.357986 1833321 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:12.372605 1833321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
I1013 14:35:12.373167 1833321 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:12.373917 1833321 main.go:141] libmachine: Using API Version  1
I1013 14:35:12.373956 1833321 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:12.374459 1833321 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:12.374741 1833321 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:35:12.377267 1833321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:12.377328 1833321 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:12.392511 1833321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
I1013 14:35:12.393002 1833321 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:12.393547 1833321 main.go:141] libmachine: Using API Version  1
I1013 14:35:12.393581 1833321 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:12.393978 1833321 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:12.394243 1833321 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:35:12.394477 1833321 ssh_runner.go:195] Run: systemctl --version
I1013 14:35:12.394516 1833321 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:35:12.397630 1833321 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:12.398103 1833321 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:35:12.398132 1833321 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:12.398429 1833321 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:35:12.398653 1833321 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:35:12.398819 1833321 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:35:12.398990 1833321 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:35:12.490675 1833321 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 14:35:12.549898 1833321 main.go:141] libmachine: Making call to close driver server
I1013 14:35:12.549926 1833321 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:12.550327 1833321 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:12.550347 1833321 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:12.550357 1833321 main.go:141] libmachine: Making call to close driver server
I1013 14:35:12.550376 1833321 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:12.550694 1833321 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:12.550710 1833321 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:12.550761 1833321 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-608191 image ls --format json --alsologtostderr:
[{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-608191"],"size":"2372971"},{"id":"sha256:742887dd7ba96da2f9650b178b39dbd0bb6a2af9dda066d362765244741db6e3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-608191"],"size":"991"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-608191 image ls --format json --alsologtostderr:
I1013 14:35:12.117058 1833297 out.go:360] Setting OutFile to fd 1 ...
I1013 14:35:12.117155 1833297 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:12.117162 1833297 out.go:374] Setting ErrFile to fd 2...
I1013 14:35:12.117166 1833297 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:12.117362 1833297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:35:12.117924 1833297 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:12.118015 1833297 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:12.118394 1833297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:12.118455 1833297 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:12.132961 1833297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
I1013 14:35:12.133760 1833297 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:12.134390 1833297 main.go:141] libmachine: Using API Version  1
I1013 14:35:12.134441 1833297 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:12.134865 1833297 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:12.135162 1833297 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:35:12.137610 1833297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:12.137657 1833297 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:12.152026 1833297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
I1013 14:35:12.152489 1833297 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:12.152926 1833297 main.go:141] libmachine: Using API Version  1
I1013 14:35:12.152952 1833297 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:12.153403 1833297 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:12.153702 1833297 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:35:12.154011 1833297 ssh_runner.go:195] Run: systemctl --version
I1013 14:35:12.154047 1833297 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:35:12.157432 1833297 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:12.158059 1833297 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:35:12.158096 1833297 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:12.158288 1833297 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:35:12.158464 1833297 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:35:12.158709 1833297 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:35:12.158888 1833297 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:35:12.244679 1833297 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 14:35:12.300322 1833297 main.go:141] libmachine: Making call to close driver server
I1013 14:35:12.300344 1833297 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:12.300699 1833297 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:12.300739 1833297 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:35:12.300741 1833297 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:12.300765 1833297 main.go:141] libmachine: Making call to close driver server
I1013 14:35:12.300773 1833297 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:12.301119 1833297 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:12.301146 1833297 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:12.301144 1833297 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-608191 image ls --format yaml --alsologtostderr:
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-608191
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:742887dd7ba96da2f9650b178b39dbd0bb6a2af9dda066d362765244741db6e3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-608191
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-608191 image ls --format yaml --alsologtostderr:
I1013 14:35:10.569045 1833171 out.go:360] Setting OutFile to fd 1 ...
I1013 14:35:10.569328 1833171 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:10.569338 1833171 out.go:374] Setting ErrFile to fd 2...
I1013 14:35:10.569345 1833171 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:10.569556 1833171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:35:10.570239 1833171 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:10.570359 1833171 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:10.570798 1833171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:10.570881 1833171 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:10.586029 1833171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
I1013 14:35:10.586624 1833171 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:10.587367 1833171 main.go:141] libmachine: Using API Version  1
I1013 14:35:10.587402 1833171 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:10.588003 1833171 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:10.588334 1833171 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:35:10.590893 1833171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:10.590942 1833171 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:10.605867 1833171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
I1013 14:35:10.606543 1833171 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:10.607164 1833171 main.go:141] libmachine: Using API Version  1
I1013 14:35:10.607204 1833171 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:10.607620 1833171 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:10.607894 1833171 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:35:10.608109 1833171 ssh_runner.go:195] Run: systemctl --version
I1013 14:35:10.608136 1833171 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:35:10.611425 1833171 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:10.611951 1833171 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:35:10.611986 1833171 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:10.612146 1833171 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:35:10.612337 1833171 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:35:10.612491 1833171 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:35:10.612670 1833171 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:35:10.703199 1833171 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 14:35:10.758864 1833171 main.go:141] libmachine: Making call to close driver server
I1013 14:35:10.758878 1833171 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:10.759188 1833171 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:10.759242 1833171 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:35:10.759265 1833171 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:10.759274 1833171 main.go:141] libmachine: Making call to close driver server
I1013 14:35:10.759282 1833171 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:10.759561 1833171 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:10.759580 1833171 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:10.759617 1833171 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-608191 ssh pgrep buildkitd: exit status 1 (230.468877ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image build -t localhost/my-image:functional-608191 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 image build -t localhost/my-image:functional-608191 testdata/build --alsologtostderr: (3.055054607s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-608191 image build -t localhost/my-image:functional-608191 testdata/build --alsologtostderr:
I1013 14:35:11.047686 1833223 out.go:360] Setting OutFile to fd 1 ...
I1013 14:35:11.047999 1833223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:11.048008 1833223 out.go:374] Setting ErrFile to fd 2...
I1013 14:35:11.048012 1833223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 14:35:11.048238 1833223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
I1013 14:35:11.048886 1833223 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:11.049640 1833223 config.go:182] Loaded profile config "functional-608191": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1013 14:35:11.050034 1833223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:11.050078 1833223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:11.064526 1833223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
I1013 14:35:11.065107 1833223 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:11.065748 1833223 main.go:141] libmachine: Using API Version  1
I1013 14:35:11.065780 1833223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:11.066204 1833223 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:11.066423 1833223 main.go:141] libmachine: (functional-608191) Calling .GetState
I1013 14:35:11.068452 1833223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1013 14:35:11.068501 1833223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 14:35:11.082662 1833223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
I1013 14:35:11.083416 1833223 main.go:141] libmachine: () Calling .GetVersion
I1013 14:35:11.084147 1833223 main.go:141] libmachine: Using API Version  1
I1013 14:35:11.084190 1833223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 14:35:11.084672 1833223 main.go:141] libmachine: () Calling .GetMachineName
I1013 14:35:11.084935 1833223 main.go:141] libmachine: (functional-608191) Calling .DriverName
I1013 14:35:11.085233 1833223 ssh_runner.go:195] Run: systemctl --version
I1013 14:35:11.085264 1833223 main.go:141] libmachine: (functional-608191) Calling .GetSSHHostname
I1013 14:35:11.089108 1833223 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:11.089721 1833223 main.go:141] libmachine: (functional-608191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:73:71", ip: ""} in network mk-functional-608191: {Iface:virbr1 ExpiryTime:2025-10-13 15:22:19 +0000 UTC Type:0 Mac:52:54:00:c4:73:71 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:functional-608191 Clientid:01:52:54:00:c4:73:71}
I1013 14:35:11.089769 1833223 main.go:141] libmachine: (functional-608191) DBG | domain functional-608191 has defined IP address 192.168.39.10 and MAC address 52:54:00:c4:73:71 in network mk-functional-608191
I1013 14:35:11.089996 1833223 main.go:141] libmachine: (functional-608191) Calling .GetSSHPort
I1013 14:35:11.090215 1833223 main.go:141] libmachine: (functional-608191) Calling .GetSSHKeyPath
I1013 14:35:11.090405 1833223 main.go:141] libmachine: (functional-608191) Calling .GetSSHUsername
I1013 14:35:11.090657 1833223 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/functional-608191/id_rsa Username:docker}
I1013 14:35:11.182173 1833223 build_images.go:161] Building image from path: /tmp/build.3959470859.tar
I1013 14:35:11.182257 1833223 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 14:35:11.200581 1833223 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3959470859.tar
I1013 14:35:11.208138 1833223 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3959470859.tar: stat -c "%s %y" /var/lib/minikube/build/build.3959470859.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3959470859.tar': No such file or directory
I1013 14:35:11.208182 1833223 ssh_runner.go:362] scp /tmp/build.3959470859.tar --> /var/lib/minikube/build/build.3959470859.tar (3072 bytes)
I1013 14:35:11.252219 1833223 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3959470859
I1013 14:35:11.266518 1833223 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3959470859 -xf /var/lib/minikube/build/build.3959470859.tar
I1013 14:35:11.291218 1833223 containerd.go:394] Building image: /var/lib/minikube/build/build.3959470859
I1013 14:35:11.291302 1833223 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3959470859 --local dockerfile=/var/lib/minikube/build/build.3959470859 --output type=image,name=localhost/my-image:functional-608191
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:d6066ef8d2731a4a92304f8c561a9469f30c64baa640da097d9628dbf2467419
#8 exporting manifest sha256:d6066ef8d2731a4a92304f8c561a9469f30c64baa640da097d9628dbf2467419 0.0s done
#8 exporting config sha256:9f825e5366b6d8792c2a58f5847aa69236470164c3674eba132d82f985a6c2ca 0.0s done
#8 naming to localhost/my-image:functional-608191 done
#8 DONE 0.2s
I1013 14:35:14.010876 1833223 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3959470859 --local dockerfile=/var/lib/minikube/build/build.3959470859 --output type=image,name=localhost/my-image:functional-608191: (2.719533792s)
I1013 14:35:14.010991 1833223 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3959470859
I1013 14:35:14.028491 1833223 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3959470859.tar
I1013 14:35:14.044307 1833223 build_images.go:217] Built localhost/my-image:functional-608191 from /tmp/build.3959470859.tar
I1013 14:35:14.044356 1833223 build_images.go:133] succeeded building to: functional-608191
I1013 14:35:14.044362 1833223 build_images.go:134] failed building to: 
I1013 14:35:14.044437 1833223 main.go:141] libmachine: Making call to close driver server
I1013 14:35:14.044456 1833223 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:14.044861 1833223 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:14.044887 1833223 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 14:35:14.044897 1833223 main.go:141] libmachine: Making call to close driver server
I1013 14:35:14.044903 1833223 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:35:14.044908 1833223 main.go:141] libmachine: (functional-608191) Calling .Close
I1013 14:35:14.045210 1833223 main.go:141] libmachine: (functional-608191) DBG | Closing plugin on server side
I1013 14:35:14.045226 1833223 main.go:141] libmachine: Successfully made call to close driver server
I1013 14:35:14.045240 1833223 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.51s)

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-608191
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr: (1.125642165s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-608191
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image load --daemon kicbase/echo-server:functional-608191 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image save kicbase/echo-server:functional-608191 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image rm kicbase/echo-server:functional-608191 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-608191
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 image save --daemon kicbase/echo-server:functional-608191 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-608191
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "293.876575ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "54.533604ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "294.038823ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "51.171479ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/ServiceCmd/List (1.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 service list: (1.260397177s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.26s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-608191 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-608191 service list -o json: (1.252915753s)
functional_test.go:1504: Took "1.253029526s" to run "out/minikube-linux-amd64 -p functional-608191 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-608191
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-608191
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-608191
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (217.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 14:37:20.514913 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:38:43.589957 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (3m37.03408666s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (217.77s)

TestMultiControlPlane/serial/DeployApp (5.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 kubectl -- rollout status deployment/busybox: (3.358121141s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-78nx7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-cql85 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-qnhgr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-78nx7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-cql85 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-qnhgr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-78nx7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-cql85 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-qnhgr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.68s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E1013 14:40:06.217623 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:06.224064 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:06.235486 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:06.257086 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:06.298992 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-78nx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1013 14:40:06.381006 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-78nx7 -- sh -c "ping -c 1 192.168.39.1"
E1013 14:40:06.543201 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-cql85 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1013 14:40:06.865228 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-cql85 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-qnhgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 kubectl -- exec busybox-7b57f96db7-qnhgr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

TestMultiControlPlane/serial/AddWorkerNode (50.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node add --alsologtostderr -v 5
E1013 14:40:07.506927 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:08.789322 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:11.351138 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:16.473011 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:26.714582 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:40:47.195949 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 node add --alsologtostderr -v 5: (49.167837199s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.12s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-632232 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (13.81s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp testdata/cp-test.txt ha-632232:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3926927098/001/cp-test_ha-632232.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232:/home/docker/cp-test.txt ha-632232-m02:/home/docker/cp-test_ha-632232_ha-632232-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test_ha-632232_ha-632232-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232:/home/docker/cp-test.txt ha-632232-m03:/home/docker/cp-test_ha-632232_ha-632232-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test_ha-632232_ha-632232-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232:/home/docker/cp-test.txt ha-632232-m04:/home/docker/cp-test_ha-632232_ha-632232-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test_ha-632232_ha-632232-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp testdata/cp-test.txt ha-632232-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3926927098/001/cp-test_ha-632232-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m02:/home/docker/cp-test.txt ha-632232:/home/docker/cp-test_ha-632232-m02_ha-632232.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test_ha-632232-m02_ha-632232.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m02:/home/docker/cp-test.txt ha-632232-m03:/home/docker/cp-test_ha-632232-m02_ha-632232-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test_ha-632232-m02_ha-632232-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m02:/home/docker/cp-test.txt ha-632232-m04:/home/docker/cp-test_ha-632232-m02_ha-632232-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test_ha-632232-m02_ha-632232-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp testdata/cp-test.txt ha-632232-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3926927098/001/cp-test_ha-632232-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m03:/home/docker/cp-test.txt ha-632232:/home/docker/cp-test_ha-632232-m03_ha-632232.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test_ha-632232-m03_ha-632232.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m03:/home/docker/cp-test.txt ha-632232-m02:/home/docker/cp-test_ha-632232-m03_ha-632232-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test_ha-632232-m03_ha-632232-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m03:/home/docker/cp-test.txt ha-632232-m04:/home/docker/cp-test_ha-632232-m03_ha-632232-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test_ha-632232-m03_ha-632232-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp testdata/cp-test.txt ha-632232-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3926927098/001/cp-test_ha-632232-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m04:/home/docker/cp-test.txt ha-632232:/home/docker/cp-test_ha-632232-m04_ha-632232.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232 "sudo cat /home/docker/cp-test_ha-632232-m04_ha-632232.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m04:/home/docker/cp-test.txt ha-632232-m02:/home/docker/cp-test_ha-632232-m04_ha-632232-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m02 "sudo cat /home/docker/cp-test_ha-632232-m04_ha-632232-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 cp ha-632232-m04:/home/docker/cp-test.txt ha-632232-m03:/home/docker/cp-test_ha-632232-m04_ha-632232-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 ssh -n ha-632232-m03 "sudo cat /home/docker/cp-test_ha-632232-m04_ha-632232-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.81s)

TestMultiControlPlane/serial/StopSecondaryNode (86.98s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node stop m02 --alsologtostderr -v 5
E1013 14:41:28.157482 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:42:20.514228 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 node stop m02 --alsologtostderr -v 5: (1m26.253996801s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5: exit status 7 (726.978235ms)

-- stdout --
	ha-632232
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-632232-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-632232-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-632232-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1013 14:42:38.738124 1838546 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:42:38.738430 1838546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:42:38.738440 1838546 out.go:374] Setting ErrFile to fd 2...
	I1013 14:42:38.738445 1838546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:42:38.738708 1838546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:42:38.738958 1838546 out.go:368] Setting JSON to false
	I1013 14:42:38.739001 1838546 mustload.go:65] Loading cluster: ha-632232
	I1013 14:42:38.739092 1838546 notify.go:220] Checking for updates...
	I1013 14:42:38.739456 1838546 config.go:182] Loaded profile config "ha-632232": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:42:38.739475 1838546 status.go:174] checking status of ha-632232 ...
	I1013 14:42:38.739954 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:38.740006 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:38.762358 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I1013 14:42:38.763099 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:38.763739 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:38.763765 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:38.764287 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:38.764643 1838546 main.go:141] libmachine: (ha-632232) Calling .GetState
	I1013 14:42:38.767022 1838546 status.go:371] ha-632232 host status = "Running" (err=<nil>)
	I1013 14:42:38.767039 1838546 host.go:66] Checking if "ha-632232" exists ...
	I1013 14:42:38.767401 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:38.767449 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:38.782637 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1013 14:42:38.783219 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:38.783813 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:38.783846 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:38.784288 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:38.784528 1838546 main.go:141] libmachine: (ha-632232) Calling .GetIP
	I1013 14:42:38.788903 1838546 main.go:141] libmachine: (ha-632232) DBG | domain ha-632232 has defined MAC address 52:54:00:7f:4b:02 in network mk-ha-632232
	I1013 14:42:38.789516 1838546 main.go:141] libmachine: (ha-632232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:4b:02", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:36:38 +0000 UTC Type:0 Mac:52:54:00:7f:4b:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-632232 Clientid:01:52:54:00:7f:4b:02}
	I1013 14:42:38.789565 1838546 main.go:141] libmachine: (ha-632232) DBG | domain ha-632232 has defined IP address 192.168.39.78 and MAC address 52:54:00:7f:4b:02 in network mk-ha-632232
	I1013 14:42:38.789729 1838546 host.go:66] Checking if "ha-632232" exists ...
	I1013 14:42:38.790223 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:38.790275 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:38.804374 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38043
	I1013 14:42:38.804841 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:38.805402 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:38.805443 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:38.805861 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:38.806091 1838546 main.go:141] libmachine: (ha-632232) Calling .DriverName
	I1013 14:42:38.806344 1838546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:42:38.806378 1838546 main.go:141] libmachine: (ha-632232) Calling .GetSSHHostname
	I1013 14:42:38.810270 1838546 main.go:141] libmachine: (ha-632232) DBG | domain ha-632232 has defined MAC address 52:54:00:7f:4b:02 in network mk-ha-632232
	I1013 14:42:38.810968 1838546 main.go:141] libmachine: (ha-632232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:4b:02", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:36:38 +0000 UTC Type:0 Mac:52:54:00:7f:4b:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-632232 Clientid:01:52:54:00:7f:4b:02}
	I1013 14:42:38.811053 1838546 main.go:141] libmachine: (ha-632232) DBG | domain ha-632232 has defined IP address 192.168.39.78 and MAC address 52:54:00:7f:4b:02 in network mk-ha-632232
	I1013 14:42:38.811305 1838546 main.go:141] libmachine: (ha-632232) Calling .GetSSHPort
	I1013 14:42:38.811525 1838546 main.go:141] libmachine: (ha-632232) Calling .GetSSHKeyPath
	I1013 14:42:38.811744 1838546 main.go:141] libmachine: (ha-632232) Calling .GetSSHUsername
	I1013 14:42:38.811964 1838546 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/ha-632232/id_rsa Username:docker}
	I1013 14:42:38.903937 1838546 ssh_runner.go:195] Run: systemctl --version
	I1013 14:42:38.912083 1838546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:42:38.935917 1838546 kubeconfig.go:125] found "ha-632232" server: "https://192.168.39.254:8443"
	I1013 14:42:38.935983 1838546 api_server.go:166] Checking apiserver status ...
	I1013 14:42:38.936055 1838546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:42:38.958899 1838546 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1500/cgroup
	W1013 14:42:38.979023 1838546 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1500/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:42:38.979090 1838546 ssh_runner.go:195] Run: ls
	I1013 14:42:38.986120 1838546 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1013 14:42:38.992172 1838546 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1013 14:42:38.992207 1838546 status.go:463] ha-632232 apiserver status = Running (err=<nil>)
	I1013 14:42:38.992219 1838546 status.go:176] ha-632232 status: &{Name:ha-632232 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:42:38.992239 1838546 status.go:174] checking status of ha-632232-m02 ...
	I1013 14:42:38.992566 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:38.992607 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.008135 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
	I1013 14:42:39.008789 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.009464 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.009495 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.009838 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.010161 1838546 main.go:141] libmachine: (ha-632232-m02) Calling .GetState
	I1013 14:42:39.012261 1838546 status.go:371] ha-632232-m02 host status = "Stopped" (err=<nil>)
	I1013 14:42:39.012282 1838546 status.go:384] host is not running, skipping remaining checks
	I1013 14:42:39.012289 1838546 status.go:176] ha-632232-m02 status: &{Name:ha-632232-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:42:39.012313 1838546 status.go:174] checking status of ha-632232-m03 ...
	I1013 14:42:39.012781 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.012839 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.027654 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I1013 14:42:39.028147 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.028652 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.028674 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.029034 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.029357 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetState
	I1013 14:42:39.031509 1838546 status.go:371] ha-632232-m03 host status = "Running" (err=<nil>)
	I1013 14:42:39.031534 1838546 host.go:66] Checking if "ha-632232-m03" exists ...
	I1013 14:42:39.031952 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.032015 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.046119 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I1013 14:42:39.046630 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.047298 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.047328 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.047691 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.047968 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetIP
	I1013 14:42:39.051262 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | domain ha-632232-m03 has defined MAC address 52:54:00:1d:67:f3 in network mk-ha-632232
	I1013 14:42:39.051761 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:67:f3", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:38:45 +0000 UTC Type:0 Mac:52:54:00:1d:67:f3 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-632232-m03 Clientid:01:52:54:00:1d:67:f3}
	I1013 14:42:39.051788 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | domain ha-632232-m03 has defined IP address 192.168.39.137 and MAC address 52:54:00:1d:67:f3 in network mk-ha-632232
	I1013 14:42:39.052002 1838546 host.go:66] Checking if "ha-632232-m03" exists ...
	I1013 14:42:39.052338 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.052379 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.067747 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I1013 14:42:39.068400 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.068981 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.069010 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.069391 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.069582 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .DriverName
	I1013 14:42:39.069817 1838546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:42:39.069841 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetSSHHostname
	I1013 14:42:39.073678 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | domain ha-632232-m03 has defined MAC address 52:54:00:1d:67:f3 in network mk-ha-632232
	I1013 14:42:39.074281 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:67:f3", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:38:45 +0000 UTC Type:0 Mac:52:54:00:1d:67:f3 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:ha-632232-m03 Clientid:01:52:54:00:1d:67:f3}
	I1013 14:42:39.074315 1838546 main.go:141] libmachine: (ha-632232-m03) DBG | domain ha-632232-m03 has defined IP address 192.168.39.137 and MAC address 52:54:00:1d:67:f3 in network mk-ha-632232
	I1013 14:42:39.074593 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetSSHPort
	I1013 14:42:39.074813 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetSSHKeyPath
	I1013 14:42:39.074997 1838546 main.go:141] libmachine: (ha-632232-m03) Calling .GetSSHUsername
	I1013 14:42:39.075137 1838546 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/ha-632232-m03/id_rsa Username:docker}
	I1013 14:42:39.162769 1838546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:42:39.186237 1838546 kubeconfig.go:125] found "ha-632232" server: "https://192.168.39.254:8443"
	I1013 14:42:39.186280 1838546 api_server.go:166] Checking apiserver status ...
	I1013 14:42:39.186327 1838546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 14:42:39.210288 1838546 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1555/cgroup
	W1013 14:42:39.224674 1838546 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1555/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 14:42:39.224771 1838546 ssh_runner.go:195] Run: ls
	I1013 14:42:39.230203 1838546 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1013 14:42:39.235870 1838546 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1013 14:42:39.235904 1838546 status.go:463] ha-632232-m03 apiserver status = Running (err=<nil>)
	I1013 14:42:39.235915 1838546 status.go:176] ha-632232-m03 status: &{Name:ha-632232-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:42:39.235960 1838546 status.go:174] checking status of ha-632232-m04 ...
	I1013 14:42:39.236475 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.236531 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.250949 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I1013 14:42:39.251506 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.252027 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.252047 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.252450 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.252675 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetState
	I1013 14:42:39.254529 1838546 status.go:371] ha-632232-m04 host status = "Running" (err=<nil>)
	I1013 14:42:39.254551 1838546 host.go:66] Checking if "ha-632232-m04" exists ...
	I1013 14:42:39.254888 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.254929 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.269477 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I1013 14:42:39.270013 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.270499 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.270528 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.270933 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.271126 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetIP
	I1013 14:42:39.274584 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | domain ha-632232-m04 has defined MAC address 52:54:00:47:83:d4 in network mk-ha-632232
	I1013 14:42:39.275195 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:83:d4", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:40:24 +0000 UTC Type:0 Mac:52:54:00:47:83:d4 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-632232-m04 Clientid:01:52:54:00:47:83:d4}
	I1013 14:42:39.275242 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | domain ha-632232-m04 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:83:d4 in network mk-ha-632232
	I1013 14:42:39.275416 1838546 host.go:66] Checking if "ha-632232-m04" exists ...
	I1013 14:42:39.275824 1838546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:42:39.275883 1838546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:42:39.290030 1838546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33953
	I1013 14:42:39.290507 1838546 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:42:39.290977 1838546 main.go:141] libmachine: Using API Version  1
	I1013 14:42:39.290995 1838546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:42:39.291316 1838546 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:42:39.291515 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .DriverName
	I1013 14:42:39.291773 1838546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 14:42:39.291800 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetSSHHostname
	I1013 14:42:39.294890 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | domain ha-632232-m04 has defined MAC address 52:54:00:47:83:d4 in network mk-ha-632232
	I1013 14:42:39.295392 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:83:d4", ip: ""} in network mk-ha-632232: {Iface:virbr1 ExpiryTime:2025-10-13 15:40:24 +0000 UTC Type:0 Mac:52:54:00:47:83:d4 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-632232-m04 Clientid:01:52:54:00:47:83:d4}
	I1013 14:42:39.295422 1838546 main.go:141] libmachine: (ha-632232-m04) DBG | domain ha-632232-m04 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:83:d4 in network mk-ha-632232
	I1013 14:42:39.295611 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetSSHPort
	I1013 14:42:39.295795 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetSSHKeyPath
	I1013 14:42:39.295993 1838546 main.go:141] libmachine: (ha-632232-m04) Calling .GetSSHUsername
	I1013 14:42:39.296155 1838546 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/ha-632232-m04/id_rsa Username:docker}
	I1013 14:42:39.385132 1838546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 14:42:39.407996 1838546 status.go:176] ha-632232-m04 status: &{Name:ha-632232-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.98s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.5s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node start m02 --alsologtostderr -v 5
E1013 14:42:50.081988 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 node start m02 --alsologtostderr -v 5: (25.156619972s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5: (1.259025569s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.061705255s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 stop --alsologtostderr -v 5
E1013 14:45:06.221769 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:45:33.926345 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 stop --alsologtostderr -v 5: (3m57.643032156s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 start --wait true --alsologtostderr -v 5
E1013 14:47:20.513649 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 start --wait true --alsologtostderr -v 5: (2m8.078521491s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.86s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 node delete m03 --alsologtostderr -v 5: (7.341649271s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (247.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 stop --alsologtostderr -v 5
E1013 14:50:06.220182 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 14:52:20.513668 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 stop --alsologtostderr -v 5: (4m7.666700211s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5: exit status 7 (115.360099ms)

                                                
                                                
-- stdout --
	ha-632232
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-632232-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-632232-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 14:53:30.115634 1842058 out.go:360] Setting OutFile to fd 1 ...
	I1013 14:53:30.115946 1842058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:53:30.115958 1842058 out.go:374] Setting ErrFile to fd 2...
	I1013 14:53:30.115963 1842058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 14:53:30.116174 1842058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 14:53:30.116365 1842058 out.go:368] Setting JSON to false
	I1013 14:53:30.116399 1842058 mustload.go:65] Loading cluster: ha-632232
	I1013 14:53:30.116533 1842058 notify.go:220] Checking for updates...
	I1013 14:53:30.116984 1842058 config.go:182] Loaded profile config "ha-632232": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 14:53:30.117011 1842058 status.go:174] checking status of ha-632232 ...
	I1013 14:53:30.117577 1842058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:53:30.117640 1842058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:53:30.137695 1842058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I1013 14:53:30.138300 1842058 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:53:30.138863 1842058 main.go:141] libmachine: Using API Version  1
	I1013 14:53:30.138915 1842058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:53:30.139332 1842058 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:53:30.139539 1842058 main.go:141] libmachine: (ha-632232) Calling .GetState
	I1013 14:53:30.141449 1842058 status.go:371] ha-632232 host status = "Stopped" (err=<nil>)
	I1013 14:53:30.141465 1842058 status.go:384] host is not running, skipping remaining checks
	I1013 14:53:30.141470 1842058 status.go:176] ha-632232 status: &{Name:ha-632232 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:53:30.141503 1842058 status.go:174] checking status of ha-632232-m02 ...
	I1013 14:53:30.141842 1842058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:53:30.141896 1842058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:53:30.156106 1842058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I1013 14:53:30.156596 1842058 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:53:30.157095 1842058 main.go:141] libmachine: Using API Version  1
	I1013 14:53:30.157115 1842058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:53:30.157448 1842058 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:53:30.157649 1842058 main.go:141] libmachine: (ha-632232-m02) Calling .GetState
	I1013 14:53:30.159771 1842058 status.go:371] ha-632232-m02 host status = "Stopped" (err=<nil>)
	I1013 14:53:30.159790 1842058 status.go:384] host is not running, skipping remaining checks
	I1013 14:53:30.159799 1842058 status.go:176] ha-632232-m02 status: &{Name:ha-632232-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 14:53:30.159823 1842058 status.go:174] checking status of ha-632232-m04 ...
	I1013 14:53:30.160149 1842058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 14:53:30.160200 1842058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 14:53:30.174242 1842058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I1013 14:53:30.174798 1842058 main.go:141] libmachine: () Calling .GetVersion
	I1013 14:53:30.175306 1842058 main.go:141] libmachine: Using API Version  1
	I1013 14:53:30.175328 1842058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 14:53:30.175699 1842058 main.go:141] libmachine: () Calling .GetMachineName
	I1013 14:53:30.175948 1842058 main.go:141] libmachine: (ha-632232-m04) Calling .GetState
	I1013 14:53:30.177777 1842058 status.go:371] ha-632232-m04 host status = "Stopped" (err=<nil>)
	I1013 14:53:30.177794 1842058 status.go:384] host is not running, skipping remaining checks
	I1013 14:53:30.177802 1842058 status.go:176] ha-632232-m04 status: &{Name:ha-632232-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (247.78s)
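Note on reading the dump above: `minikube status` exits non-zero (here exit status 7) whenever a host is stopped, and the stdout block follows a fixed `node-name / type: / host: / kubelet: / ...` layout with blank lines between nodes. A quick sketch of parsing such a dump into per-node dicts — the helper name and sample are illustrative only, not part of the test harness:

```python
def parse_status(dump: str) -> dict:
    """Parse a minikube `status` text dump into {node: {field: value}}."""
    nodes, current = {}, None
    for line in dump.splitlines():
        line = line.strip()
        if not line:
            current = None          # blank line ends the current node block
        elif ":" in line:
            key, _, value = line.partition(":")
            if current:
                nodes[current][key.strip()] = value.strip()
        else:
            current = line          # a bare line starts a new node block
            nodes[current] = {}
    return nodes

sample = """\
ha-632232
type: Control Plane
host: Stopped
kubelet: Stopped

ha-632232-m04
type: Worker
host: Stopped
kubelet: Stopped
"""
```

Against the dump above, this yields one entry per node with `host`, `kubelet`, etc. as string fields.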

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (101.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 14:55:06.218112 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m40.605297259s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.44s)
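The `kubectl get nodes -o go-template=...` command in the step above walks every node's `status.conditions` and prints the status of each `Ready` condition. The same selection, sketched in Python over `kubectl get nodes -o json`-shaped data (function name and sample are my own, for illustration):

```python
def ready_statuses(nodes_json: dict) -> list:
    """Collect the Ready condition status per node, mirroring the go-template."""
    return [
        cond["status"]
        for item in nodes_json["items"]
        for cond in item["status"]["conditions"]
        if cond["type"] == "Ready"
    ]

# Minimal stand-in for a NodeList with one node whose Ready condition is True.
sample = {"items": [{"status": {"conditions": [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]}}]}
```

The test presumably passes when every collected status is `"True"`.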

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 node add --control-plane --alsologtostderr -v 5
E1013 14:55:23.591897 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-632232 node add --control-plane --alsologtostderr -v 5: (1m15.456641702s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-632232 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.41s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1013 14:56:29.287930 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
TestJSONOutput/start/Command (82.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-567388 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 14:57:20.513942 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-567388 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m22.550154573s)
--- PASS: TestJSONOutput/start/Command (82.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-567388 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-567388 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-567388 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-567388 --output=json --user=testUser: (6.995737628s)
--- PASS: TestJSONOutput/stop/Command (7.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-795728 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-795728 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.344821ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"325674b9-f3db-48da-bd79-243ab893392e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-795728] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3b8754c-b752-41e1-87a0-a9be7356b6e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"6d1c01de-3420-43b1-af85-768aa70431e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d277a62b-7034-4cb0-8365-2bffe833a49e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig"}}
	{"specversion":"1.0","id":"3101beef-93d4-4a89-9157-e6a1a879d8ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube"}}
	{"specversion":"1.0","id":"f4903359-7321-47fc-854c-0999d96a5dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ed477590-b589-4e86-bbe0-4e78121d9aca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f0aa6692-4a6c-4a8d-b7cb-61af9c965ba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-795728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-795728
--- PASS: TestErrorJSONOutput (0.22s)
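Each stdout line above is a CloudEvents envelope (`specversion`, `id`, `source`, `type`, `data`), with the event kind in `type` — e.g. `io.k8s.sigs.minikube.info` versus `io.k8s.sigs.minikube.error`. A sketch of filtering such a stream down to the error payloads (helper name and trimmed sample are illustrative, not from the harness):

```python
import json

def error_events(lines):
    """From minikube --output=json lines, return the data payloads of error events."""
    events = [json.loads(line) for line in lines if line.strip()]
    return [e["data"] for e in events
            if e["type"] == "io.k8s.sigs.minikube.error"]

# Two trimmed-down event lines in the same shape as the stdout block above.
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver \'fail\' is not supported on linux/amd64"}}',
]
```

Here the error payload carries the exit code (`56`) and the error name (`DRV_UNSUPPORTED_OS`) that the test asserts on.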

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (84.77s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-523884 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-523884 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (39.409887438s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-526189 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-526189 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (42.695917796s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-523884
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-526189
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-526189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-526189
helpers_test.go:175: Cleaning up "first-523884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-523884
--- PASS: TestMinikubeProfile (84.77s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-099677 --memory=3072 --mount-string /tmp/TestMountStartserial168728937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-099677 --memory=3072 --mount-string /tmp/TestMountStartserial168728937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (21.043307075s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.04s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-099677 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-099677 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-115627 --memory=3072 --mount-string /tmp/TestMountStartserial168728937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 15:00:06.222320 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-115627 --memory=3072 --mount-string /tmp/TestMountStartserial168728937/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (23.459998662s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-099677 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-115627
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-115627: (1.226600623s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-115627
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-115627: (19.09533828s)
--- PASS: TestMountStart/serial/RestartStopped (20.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115627 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.6s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-864755 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 15:02:20.513549 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-864755 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m44.137562796s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.60s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-864755 -- rollout status deployment/busybox: (2.347218687s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-8bs85 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-8bs85 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-4gfxm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-8bs85 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-4gfxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-4gfxm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-8bs85 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-864755 -- exec busybox-7b57f96db7-8bs85 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
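The IP-extraction pipeline exec'd in each busybox pod above can be sketched as follows. This is an illustrative re-creation, assuming the busybox `nslookup` output layout shown in the canned transcript below: line 5 is `Address 1: <ip> <name>`, so `awk 'NR==5' | cut -d' ' -f3` isolates the gateway IP that the pod then pings.

```shell
# Canned transcript standing in for `nslookup host.minikube.internal` from busybox.
lookup='Server: 10.96.0.10
Address: 10.96.0.10:53

Name: host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Take line 5, split on single spaces, keep field 3 (the bare IP).
ip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

Note `cut -d' '` treats each space as a delimiter, so this only works when the address line uses single spaces between tokens, as busybox's nslookup does.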

                                                
                                    
TestMultiNode/serial/AddNode (42.05s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-864755 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-864755 -v=5 --alsologtostderr: (41.440131262s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.05s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-864755 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp testdata/cp-test.txt multinode-864755:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile860272781/001/cp-test_multinode-864755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755:/home/docker/cp-test.txt multinode-864755-m02:/home/docker/cp-test_multinode-864755_multinode-864755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test_multinode-864755_multinode-864755-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755:/home/docker/cp-test.txt multinode-864755-m03:/home/docker/cp-test_multinode-864755_multinode-864755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test_multinode-864755_multinode-864755-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp testdata/cp-test.txt multinode-864755-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile860272781/001/cp-test_multinode-864755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m02:/home/docker/cp-test.txt multinode-864755:/home/docker/cp-test_multinode-864755-m02_multinode-864755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test_multinode-864755-m02_multinode-864755.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m02:/home/docker/cp-test.txt multinode-864755-m03:/home/docker/cp-test_multinode-864755-m02_multinode-864755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test_multinode-864755-m02_multinode-864755-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp testdata/cp-test.txt multinode-864755-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile860272781/001/cp-test_multinode-864755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m03:/home/docker/cp-test.txt multinode-864755:/home/docker/cp-test_multinode-864755-m03_multinode-864755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755 "sudo cat /home/docker/cp-test_multinode-864755-m03_multinode-864755.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 cp multinode-864755-m03:/home/docker/cp-test.txt multinode-864755-m02:/home/docker/cp-test_multinode-864755-m03_multinode-864755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 ssh -n multinode-864755-m02 "sudo cat /home/docker/cp-test_multinode-864755-m03_multinode-864755-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.80s)
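Each cp/cat pair above reduces to a round-trip check: copy a file to a node, read it back with `sudo cat` over ssh, and compare against the source. A minimal local sketch of that pattern, with plain `cp` and `cat` standing in for `minikube cp` and `minikube ssh` (file names here are illustrative, not the test's):

```shell
# Create a source file, copy it, and read the copy back.
workdir=$(mktemp -d)
echo "Test file for minikube cp" > "$workdir/cp-test.txt"   # stand-in for testdata/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"  # stand-in for `minikube cp`
readback=$(cat "$workdir/cp-test_roundtrip.txt")            # stand-in for `ssh -n ... sudo cat`

# The check passes only if the copied contents match the original exactly.
[ "$readback" = "Test file for minikube cp" ] && echo "contents match"
```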

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-864755 node stop m03: (1.425660912s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-864755 status: exit status 7 (451.968372ms)
-- stdout --
	multinode-864755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-864755-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-864755-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr: exit status 7 (468.640968ms)
-- stdout --
	multinode-864755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-864755-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-864755-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 15:03:21.598626 1849344 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:03:21.598924 1849344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:03:21.598936 1849344 out.go:374] Setting ErrFile to fd 2...
	I1013 15:03:21.598940 1849344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:03:21.599160 1849344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:03:21.599351 1849344 out.go:368] Setting JSON to false
	I1013 15:03:21.599384 1849344 mustload.go:65] Loading cluster: multinode-864755
	I1013 15:03:21.599432 1849344 notify.go:220] Checking for updates...
	I1013 15:03:21.599961 1849344 config.go:182] Loaded profile config "multinode-864755": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:03:21.599987 1849344 status.go:174] checking status of multinode-864755 ...
	I1013 15:03:21.600592 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.600651 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.616241 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46575
	I1013 15:03:21.616829 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.617572 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.617600 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.618111 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.618398 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetState
	I1013 15:03:21.620605 1849344 status.go:371] multinode-864755 host status = "Running" (err=<nil>)
	I1013 15:03:21.620633 1849344 host.go:66] Checking if "multinode-864755" exists ...
	I1013 15:03:21.621007 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.621085 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.635445 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I1013 15:03:21.635984 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.636622 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.636651 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.636976 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.637219 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetIP
	I1013 15:03:21.640529 1849344 main.go:141] libmachine: (multinode-864755) DBG | domain multinode-864755 has defined MAC address 52:54:00:64:61:14 in network mk-multinode-864755
	I1013 15:03:21.641064 1849344 main.go:141] libmachine: (multinode-864755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:14", ip: ""} in network mk-multinode-864755: {Iface:virbr1 ExpiryTime:2025-10-13 16:00:54 +0000 UTC Type:0 Mac:52:54:00:64:61:14 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-864755 Clientid:01:52:54:00:64:61:14}
	I1013 15:03:21.641112 1849344 main.go:141] libmachine: (multinode-864755) DBG | domain multinode-864755 has defined IP address 192.168.39.7 and MAC address 52:54:00:64:61:14 in network mk-multinode-864755
	I1013 15:03:21.641280 1849344 host.go:66] Checking if "multinode-864755" exists ...
	I1013 15:03:21.641571 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.641608 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.656635 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I1013 15:03:21.657210 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.657749 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.657773 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.658208 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.658443 1849344 main.go:141] libmachine: (multinode-864755) Calling .DriverName
	I1013 15:03:21.658654 1849344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 15:03:21.658685 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetSSHHostname
	I1013 15:03:21.662196 1849344 main.go:141] libmachine: (multinode-864755) DBG | domain multinode-864755 has defined MAC address 52:54:00:64:61:14 in network mk-multinode-864755
	I1013 15:03:21.662719 1849344 main.go:141] libmachine: (multinode-864755) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:14", ip: ""} in network mk-multinode-864755: {Iface:virbr1 ExpiryTime:2025-10-13 16:00:54 +0000 UTC Type:0 Mac:52:54:00:64:61:14 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-864755 Clientid:01:52:54:00:64:61:14}
	I1013 15:03:21.662747 1849344 main.go:141] libmachine: (multinode-864755) DBG | domain multinode-864755 has defined IP address 192.168.39.7 and MAC address 52:54:00:64:61:14 in network mk-multinode-864755
	I1013 15:03:21.662947 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetSSHPort
	I1013 15:03:21.663191 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetSSHKeyPath
	I1013 15:03:21.663373 1849344 main.go:141] libmachine: (multinode-864755) Calling .GetSSHUsername
	I1013 15:03:21.663558 1849344 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/multinode-864755/id_rsa Username:docker}
	I1013 15:03:21.752448 1849344 ssh_runner.go:195] Run: systemctl --version
	I1013 15:03:21.760776 1849344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 15:03:21.778592 1849344 kubeconfig.go:125] found "multinode-864755" server: "https://192.168.39.7:8443"
	I1013 15:03:21.778638 1849344 api_server.go:166] Checking apiserver status ...
	I1013 15:03:21.778681 1849344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 15:03:21.799295 1849344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	W1013 15:03:21.813582 1849344 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 15:03:21.813643 1849344 ssh_runner.go:195] Run: ls
	I1013 15:03:21.819400 1849344 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1013 15:03:21.825890 1849344 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I1013 15:03:21.825920 1849344 status.go:463] multinode-864755 apiserver status = Running (err=<nil>)
	I1013 15:03:21.825931 1849344 status.go:176] multinode-864755 status: &{Name:multinode-864755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 15:03:21.825948 1849344 status.go:174] checking status of multinode-864755-m02 ...
	I1013 15:03:21.826264 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.826321 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.841382 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I1013 15:03:21.841888 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.842372 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.842392 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.842758 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.842934 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetState
	I1013 15:03:21.845080 1849344 status.go:371] multinode-864755-m02 host status = "Running" (err=<nil>)
	I1013 15:03:21.845098 1849344 host.go:66] Checking if "multinode-864755-m02" exists ...
	I1013 15:03:21.845396 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.845436 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.860325 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34681
	I1013 15:03:21.860769 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.861334 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.861364 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.861762 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.862006 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetIP
	I1013 15:03:21.865295 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | domain multinode-864755-m02 has defined MAC address 52:54:00:79:49:1b in network mk-multinode-864755
	I1013 15:03:21.865824 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:49:1b", ip: ""} in network mk-multinode-864755: {Iface:virbr1 ExpiryTime:2025-10-13 16:01:51 +0000 UTC Type:0 Mac:52:54:00:79:49:1b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-864755-m02 Clientid:01:52:54:00:79:49:1b}
	I1013 15:03:21.865878 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | domain multinode-864755-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:79:49:1b in network mk-multinode-864755
	I1013 15:03:21.866111 1849344 host.go:66] Checking if "multinode-864755-m02" exists ...
	I1013 15:03:21.866546 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.866603 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:21.881398 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I1013 15:03:21.881943 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:21.882497 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:21.882516 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:21.882899 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:21.883177 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .DriverName
	I1013 15:03:21.883463 1849344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 15:03:21.883503 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetSSHHostname
	I1013 15:03:21.887200 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | domain multinode-864755-m02 has defined MAC address 52:54:00:79:49:1b in network mk-multinode-864755
	I1013 15:03:21.887730 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:49:1b", ip: ""} in network mk-multinode-864755: {Iface:virbr1 ExpiryTime:2025-10-13 16:01:51 +0000 UTC Type:0 Mac:52:54:00:79:49:1b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:multinode-864755-m02 Clientid:01:52:54:00:79:49:1b}
	I1013 15:03:21.887787 1849344 main.go:141] libmachine: (multinode-864755-m02) DBG | domain multinode-864755-m02 has defined IP address 192.168.39.71 and MAC address 52:54:00:79:49:1b in network mk-multinode-864755
	I1013 15:03:21.887965 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetSSHPort
	I1013 15:03:21.888158 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetSSHKeyPath
	I1013 15:03:21.888321 1849344 main.go:141] libmachine: (multinode-864755-m02) Calling .GetSSHUsername
	I1013 15:03:21.888500 1849344 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-1810975/.minikube/machines/multinode-864755-m02/id_rsa Username:docker}
	I1013 15:03:21.973794 1849344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 15:03:21.991414 1849344 status.go:176] multinode-864755-m02 status: &{Name:multinode-864755-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 15:03:21.991455 1849344 status.go:174] checking status of multinode-864755-m03 ...
	I1013 15:03:21.991839 1849344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:03:21.991894 1849344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:03:22.007636 1849344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1013 15:03:22.008255 1849344 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:03:22.008887 1849344 main.go:141] libmachine: Using API Version  1
	I1013 15:03:22.008914 1849344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:03:22.009421 1849344 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:03:22.009659 1849344 main.go:141] libmachine: (multinode-864755-m03) Calling .GetState
	I1013 15:03:22.011745 1849344 status.go:371] multinode-864755-m03 host status = "Stopped" (err=<nil>)
	I1013 15:03:22.011761 1849344 status.go:384] host is not running, skipping remaining checks
	I1013 15:03:22.011767 1849344 status.go:176] multinode-864755-m03 status: &{Name:multinode-864755-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
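The per-node blocks in the `minikube status` stdout above follow a simple shape: a profile name on its own line, then `field: value` pairs. A minimal sketch (assuming that format, with a canned excerpt of the output shown above) of pairing each profile with its host state:

```shell
# Canned excerpt standing in for `minikube -p multinode-864755 status` stdout.
status='multinode-864755
type: Control Plane
host: Running

multinode-864755-m03
type: Worker
host: Stopped'

# Lines without a colon name a profile; each "host:" line reports its state.
summary=$(printf '%s\n' "$status" | awk 'NF && $0 !~ /:/ {node=$0} $1=="host:" {print node"="$2}')
printf '%s\n' "$summary"
```

A caller could grep this summary for `=Stopped` to decide, as the test does, whether a non-zero exit from `status` is expected.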

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.42s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-864755 node start m03 -v=5 --alsologtostderr: (34.748974756s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.42s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (342.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-864755
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-864755
E1013 15:05:06.217085 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-864755: (2m51.730275456s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-864755 --wait=true -v=5 --alsologtostderr
E1013 15:07:20.515079 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-864755 --wait=true -v=5 --alsologtostderr: (2m50.929027041s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-864755
--- PASS: TestMultiNode/serial/RestartKeepsNodes (342.77s)

TestMultiNode/serial/DeleteNode (2.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-864755 node delete m03: (1.646751776s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)
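The DeleteNode check above renders node readiness through a Go template passed to `kubectl get nodes -o go-template`. As a minimal sketch of how that template evaluates, the snippet below runs the exact template string from the log over hypothetical node JSON (stand-in data shaped like the Kubernetes API output, not live `kubectl` results):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// Hypothetical stand-in for `kubectl get nodes -o json`: two nodes,
// each carrying a "Ready" condition among its status conditions.
const nodesJSON = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}},
    {"status": {"conditions": [
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

// The same template string the test passes to kubectl: for every node,
// print the status of its "Ready" condition on its own line.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// readyStatuses renders the template over the decoded JSON document.
// Decoding into map[string]interface{} lets the template resolve the
// lowercase JSON field names, just as kubectl's go-template output does.
func readyStatuses(doc string) (string, error) {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &nodes); err != nil {
		return "", err
	}
	t, err := template.New("ready").Parse(readyTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, nodes); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := readyStatuses(nodesJSON)
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // one " True" line per Ready node
}
```

The test passes when every emitted line reads `True`, i.e. all nodes report Ready.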

TestMultiNode/serial/StopMultiNode (173.27s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 stop
E1013 15:10:06.222034 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:12:03.596442 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:12:20.516494 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-864755 stop: (2m53.078814442s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-864755 status: exit status 7 (98.882403ms)
-- stdout --
	multinode-864755
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-864755-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr: exit status 7 (89.727759ms)
-- stdout --
	multinode-864755
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-864755-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 15:12:35.696116 1852127 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:12:35.696438 1852127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:12:35.696446 1852127 out.go:374] Setting ErrFile to fd 2...
	I1013 15:12:35.696452 1852127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:12:35.697028 1852127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:12:35.697286 1852127 out.go:368] Setting JSON to false
	I1013 15:12:35.697325 1852127 mustload.go:65] Loading cluster: multinode-864755
	I1013 15:12:35.697365 1852127 notify.go:220] Checking for updates...
	I1013 15:12:35.697748 1852127 config.go:182] Loaded profile config "multinode-864755": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:12:35.697767 1852127 status.go:174] checking status of multinode-864755 ...
	I1013 15:12:35.698208 1852127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:12:35.698287 1852127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:12:35.713048 1852127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I1013 15:12:35.713611 1852127 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:12:35.714286 1852127 main.go:141] libmachine: Using API Version  1
	I1013 15:12:35.714319 1852127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:12:35.714705 1852127 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:12:35.714928 1852127 main.go:141] libmachine: (multinode-864755) Calling .GetState
	I1013 15:12:35.716871 1852127 status.go:371] multinode-864755 host status = "Stopped" (err=<nil>)
	I1013 15:12:35.716886 1852127 status.go:384] host is not running, skipping remaining checks
	I1013 15:12:35.716892 1852127 status.go:176] multinode-864755 status: &{Name:multinode-864755 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 15:12:35.716913 1852127 status.go:174] checking status of multinode-864755-m02 ...
	I1013 15:12:35.717242 1852127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1013 15:12:35.717320 1852127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 15:12:35.731230 1852127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I1013 15:12:35.731697 1852127 main.go:141] libmachine: () Calling .GetVersion
	I1013 15:12:35.732162 1852127 main.go:141] libmachine: Using API Version  1
	I1013 15:12:35.732183 1852127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 15:12:35.732543 1852127 main.go:141] libmachine: () Calling .GetMachineName
	I1013 15:12:35.732733 1852127 main.go:141] libmachine: (multinode-864755-m02) Calling .GetState
	I1013 15:12:35.734763 1852127 status.go:371] multinode-864755-m02 host status = "Stopped" (err=<nil>)
	I1013 15:12:35.734778 1852127 status.go:384] host is not running, skipping remaining checks
	I1013 15:12:35.734783 1852127 status.go:176] multinode-864755-m02 status: &{Name:multinode-864755-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (173.27s)

TestMultiNode/serial/RestartMultiNode (79.84s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-864755 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 15:13:09.289587 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-864755 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m19.255841052s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-864755 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.84s)

TestMultiNode/serial/ValidateNameConflict (41.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-864755
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-864755-m02 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-864755-m02 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (71.746911ms)
-- stdout --
	* [multinode-864755-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-864755-m02' is duplicated with machine name 'multinode-864755-m02' in profile 'multinode-864755'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
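The MK_USAGE failure above fires because `multinode-864755-m02` is already a machine name inside the existing `multinode-864755` profile. A hypothetical sketch of such a uniqueness check (the `profiles` map and `validateProfileName` helper are illustrative assumptions, not minikube's code):

```go
package main

import "fmt"

// profiles maps each existing profile to the machine (node) names it owns.
// Hypothetical data mirroring the log: profile multinode-864755 owns its
// control-plane machine plus workers -m02 and -m03.
var profiles = map[string][]string{
	"multinode-864755": {
		"multinode-864755",
		"multinode-864755-m02",
		"multinode-864755-m03",
	},
}

// validateProfileName rejects a new profile whose name collides with a
// machine name already owned by another profile, in the spirit of the
// "Profile name should be unique" guard seen in the log.
func validateProfileName(name string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateProfileName("multinode-864755-m02")) // collides with an existing machine
	fmt.Println(validateProfileName("multinode-864755-m04")) // no collision
}
```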
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-864755-m03 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-864755-m03 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (40.609010342s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-864755
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-864755: exit status 80 (243.750058ms)
-- stdout --
	* Adding node m03 to cluster multinode-864755 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-864755-m03 already exists in multinode-864755-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-864755-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.72s)

TestPreload (120.12s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-274176 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.32.0
E1013 15:15:06.218056 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-274176 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m6.24536481s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-274176 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-274176 image pull gcr.io/k8s-minikube/busybox: (1.837416565s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-274176
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-274176: (6.710855584s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-274176 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-274176 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (44.309988557s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-274176 image list
helpers_test.go:175: Cleaning up "test-preload-274176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-274176
--- PASS: TestPreload (120.12s)

TestScheduledStopUnix (112.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-587255 --memory=3072 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-587255 --memory=3072 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (40.819530225s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-587255 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-587255 -n scheduled-stop-587255
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-587255 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 15:17:19.946346 1814927 retry.go:31] will retry after 99.408µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.947584 1814927 retry.go:31] will retry after 205.14µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.948696 1814927 retry.go:31] will retry after 214.691µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.949870 1814927 retry.go:31] will retry after 182.785µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.951011 1814927 retry.go:31] will retry after 467.065µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.952169 1814927 retry.go:31] will retry after 499.776µs: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.953309 1814927 retry.go:31] will retry after 1.437726ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.955521 1814927 retry.go:31] will retry after 2.451326ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.958732 1814927 retry.go:31] will retry after 1.900317ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.960967 1814927 retry.go:31] will retry after 4.969008ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.966181 1814927 retry.go:31] will retry after 3.495353ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.970449 1814927 retry.go:31] will retry after 7.40115ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.978689 1814927 retry.go:31] will retry after 6.625921ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:19.986014 1814927 retry.go:31] will retry after 18.715807ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:20.005339 1814927 retry.go:31] will retry after 15.193464ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
I1013 15:17:20.021678 1814927 retry.go:31] will retry after 49.515369ms: open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/scheduled-stop-587255/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-587255 --cancel-scheduled
E1013 15:17:20.514149 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-587255 -n scheduled-stop-587255
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-587255
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-587255 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-587255
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-587255: exit status 7 (79.691261ms)
-- stdout --
	scheduled-stop-587255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-587255 -n scheduled-stop-587255
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-587255 -n scheduled-stop-587255: exit status 7 (73.183624ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-587255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-587255
--- PASS: TestScheduledStopUnix (112.70s)

TestRunningBinaryUpgrade (121.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2211064365 start -p running-upgrade-993501 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2211064365 start -p running-upgrade-993501 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m30.732306517s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-993501 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-993501 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (28.592868143s)
helpers_test.go:175: Cleaning up "running-upgrade-993501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-993501
--- PASS: TestRunningBinaryUpgrade (121.10s)

TestKubernetesUpgrade (124.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (44.884626855s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-406324
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-406324: (1.785655827s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-406324 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-406324 status --format={{.Host}}: exit status 7 (79.927473ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (48.493968515s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-406324 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 106 (122.086205ms)
-- stdout --
	* [kubernetes-upgrade-406324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-406324
	    minikube start -p kubernetes-upgrade-406324 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4063242 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-406324 --kubernetes-version=v1.34.1
	    
** /stderr **
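The K8S_DOWNGRADE_UNSUPPORTED guard above refuses to move a v1.34.1 cluster back to v1.28.0. A minimal sketch of a version-comparison guard of this kind (hand-rolled parsing for illustration; `parseVersion` and `checkDowngrade` are assumptions, not minikube's implementation, which uses a semver library):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a "v1.34.1"-style string into its numeric fields.
func parseVersion(v string) ([3]int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	var out [3]int
	if len(parts) != 3 {
		return out, fmt.Errorf("malformed version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}

// checkDowngrade returns an error when the requested version is older
// than the cluster's current version; upgrades and same-version restarts
// pass through.
func checkDowngrade(current, requested string) error {
	cur, err := parseVersion(current)
	if err != nil {
		return err
	}
	req, err := parseVersion(requested)
	if err != nil {
		return err
	}
	for i := range cur {
		if req[i] < cur[i] {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				current, requested)
		}
		if req[i] > cur[i] {
			return nil
		}
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.34.1", "v1.28.0")) // downgrade: rejected
	fmt.Println(checkDowngrade("v1.34.1", "v1.34.1")) // same version: allowed
}
```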
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-406324 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (28.084567074s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-406324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-406324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-406324: (1.169895548s)
--- PASS: TestKubernetesUpgrade (124.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (95.517612ms)
-- stdout --
	* [NoKubernetes-437394] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (108.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437394 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437394 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m48.289107985s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-437394 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (108.64s)

TestNetworkPlugins/group/false (3.93s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-045564 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-045564 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (121.294836ms)
-- stdout --
	* [false-045564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 15:20:02.936677 1857626 out.go:360] Setting OutFile to fd 1 ...
	I1013 15:20:02.936991 1857626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:20:02.937002 1857626 out.go:374] Setting ErrFile to fd 2...
	I1013 15:20:02.937007 1857626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 15:20:02.937226 1857626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-1810975/.minikube/bin
	I1013 15:20:02.937753 1857626 out.go:368] Setting JSON to false
	I1013 15:20:02.938860 1857626 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":25351,"bootTime":1760343452,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 15:20:02.938973 1857626 start.go:141] virtualization: kvm guest
	I1013 15:20:02.941043 1857626 out.go:179] * [false-045564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 15:20:02.942433 1857626 notify.go:220] Checking for updates...
	I1013 15:20:02.942454 1857626 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 15:20:02.943806 1857626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 15:20:02.945370 1857626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-1810975/kubeconfig
	I1013 15:20:02.946917 1857626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-1810975/.minikube
	I1013 15:20:02.948404 1857626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 15:20:02.949904 1857626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 15:20:02.951773 1857626 config.go:182] Loaded profile config "NoKubernetes-437394": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:20:02.951867 1857626 config.go:182] Loaded profile config "cert-expiration-407214": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:20:02.951964 1857626 config.go:182] Loaded profile config "cert-options-740924": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1013 15:20:02.952083 1857626 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 15:20:02.993005 1857626 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 15:20:02.994404 1857626 start.go:305] selected driver: kvm2
	I1013 15:20:02.994427 1857626 start.go:925] validating driver "kvm2" against <nil>
	I1013 15:20:02.994439 1857626 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 15:20:02.996525 1857626 out.go:203] 
	W1013 15:20:02.997806 1857626 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1013 15:20:02.999024 1857626 out.go:203] 

                                                
                                                
** /stderr **
E1013 15:20:06.217450 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-045564 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-045564

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-045564

>>> host: /etc/nsswitch.conf:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/hosts:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/resolv.conf:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-045564

>>> host: crictl pods:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: crictl containers:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> k8s: describe netcat deployment:
error: context "false-045564" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-045564" does not exist

>>> k8s: netcat logs:
error: context "false-045564" does not exist

>>> k8s: describe coredns deployment:
error: context "false-045564" does not exist

>>> k8s: describe coredns pods:
error: context "false-045564" does not exist

>>> k8s: coredns logs:
error: context "false-045564" does not exist

>>> k8s: describe api server pod(s):
error: context "false-045564" does not exist

>>> k8s: api server logs:
error: context "false-045564" does not exist

>>> host: /etc/cni:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: ip a s:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: ip r s:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: iptables-save:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: iptables table nat:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> k8s: describe kube-proxy daemon set:
error: context "false-045564" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-045564" does not exist

>>> k8s: kube-proxy logs:
error: context "false-045564" does not exist

>>> host: kubelet daemon status:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: kubelet daemon config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> k8s: kubelet logs:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-045564

>>> host: docker daemon status:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: docker daemon config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/docker/daemon.json:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: docker system info:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: cri-docker daemon status:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: cri-docker daemon config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: cri-dockerd version:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: containerd daemon status:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: containerd daemon config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/containerd/config.toml:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: containerd config dump:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: crio daemon status:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: crio daemon config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: /etc/crio:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

>>> host: crio config:
* Profile "false-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-045564"

----------------------- debugLogs end: false-045564 [took: 3.612759292s] --------------------------------
helpers_test.go:175: Cleaning up "false-045564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-045564
--- PASS: TestNetworkPlugins/group/false (3.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (55.63s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (54.512668662s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-437394 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-437394 status -o json: exit status 2 (328.956051ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-437394","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-437394
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (55.63s)

                                                
                                    
TestNoKubernetes/serial/Start (33.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437394 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (33.498492097s)
--- PASS: TestNoKubernetes/serial/Start (33.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-437394 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-437394 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.131298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (22.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (20.429478918s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.660235424s)
--- PASS: TestNoKubernetes/serial/ProfileList (22.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-437394
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-437394: (1.407180534s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (114.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3151935994 start -p stopped-upgrade-189297 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3151935994 start -p stopped-upgrade-189297 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m2.328136261s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3151935994 -p stopped-upgrade-189297 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3151935994 -p stopped-upgrade-189297 stop: (1.415861216s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-189297 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-189297 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (50.475532157s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (38.99s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-437394 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 15:22:20.513799 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-437394 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (38.994606023s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-437394 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-437394 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.203429ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestPause/serial/Start (87.38s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-383347 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-383347 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m27.374835376s)
--- PASS: TestPause/serial/Start (87.38s)

TestNetworkPlugins/group/auto/Start (68.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m8.866929641s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.87s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-189297
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-189297: (1.390084312s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestNetworkPlugins/group/kindnet/Start (65.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m5.796934637s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.80s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-045564 "pgrep -a kubelet"
I1013 15:24:14.602441 1814927 config.go:182] Loaded profile config "auto-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kmp5n" [d6946dde-7b22-47b5-84b7-a35fde5076cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kmp5n" [d6946dde-7b22-47b5-84b7-a35fde5076cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.02267839s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestPause/serial/SecondStartNoReconfiguration (67.59s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-383347 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-383347 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m7.568266968s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (67.59s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (83.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1013 15:25:06.217358 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m23.800218066s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.80s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jwh7m" [b753855d-b5c3-4beb-93ea-a5935fa95582] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005203629s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-045564 "pgrep -a kubelet"
I1013 15:25:21.711204 1814927 config.go:182] Loaded profile config "kindnet-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.92s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qbt25" [a66f3ad9-5c53-467b-b2ff-c4ae414173bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qbt25" [a66f3ad9-5c53-467b-b2ff-c4ae414173bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005539953s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.92s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-383347 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-383347 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-383347 --output=json --layout=cluster: exit status 2 (322.070224ms)

-- stdout --
	{"Name":"pause-383347","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-383347","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.87s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-383347 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-383347 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (0.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-383347 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.79s)

TestPause/serial/VerifyDeletedResources (4.09s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.090086617s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.09s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (85.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m25.135066261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.14s)

TestNetworkPlugins/group/flannel/Start (87.07s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m27.068864324s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.07s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-045564 "pgrep -a kubelet"
I1013 15:26:06.939179 1814927 config.go:182] Loaded profile config "custom-flannel-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kgfrj" [ec26900a-06f2-40b1-87b6-ca2d9d68a14b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kgfrj" [ec26900a-06f2-40b1-87b6-ca2d9d68a14b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005129938s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (84.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-045564 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m24.751788998s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.75s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-045564 "pgrep -a kubelet"
I1013 15:27:01.266192 1814927 config.go:182] Loaded profile config "enable-default-cni-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9rw2q" [8871ab7f-44d5-4abe-a54a-d41462fa1055] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9rw2q" [8871ab7f-44d5-4abe-a54a-d41462fa1055] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004634604s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-xr9x7" [cf9d694d-3e89-4287-bf46-7b31c7d6ee84] Running
E1013 15:27:20.514135 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005011566s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-045564 "pgrep -a kubelet"
I1013 15:27:24.198261 1814927 config.go:182] Loaded profile config "flannel-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g2t6f" [87bdba42-b773-45a2-ae29-8364083d4751] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g2t6f" [87bdba42-b773-45a2-ae29-8364083d4751] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006066869s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (98.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-316150 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-316150 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m38.244129392s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (98.24s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (83.47s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m23.474624333s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.47s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-045564 "pgrep -a kubelet"
I1013 15:28:01.487886 1814927 config.go:182] Loaded profile config "bridge-045564": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-045564 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-66dg9" [4fd700ea-fd79-4206-a40b-2dd1610e097f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-66dg9" [4fd700ea-fd79-4206-a40b-2dd1610e097f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005498522s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.33s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-045564 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-045564 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/embed-certs/serial/FirstStart (90.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-516717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 15:28:43.598201 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-516717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m30.800110825s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-316150 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c9708474-cd16-43ba-b394-85c320548022] Pending
helpers_test.go:352: "busybox" [c9708474-cd16-43ba-b394-85c320548022] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c9708474-cd16-43ba-b394-85c320548022] Running
E1013 15:29:14.882829 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:14.889299 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:14.900797 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:14.922309 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:14.963888 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:15.045871 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:15.207506 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:15.529372 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004271433s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-316150 exec busybox -- /bin/sh -c "ulimit -n"
E1013 15:29:17.453847 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-673307 create -f testdata/busybox.yaml
E1013 15:29:16.171412 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f795ac56-8daf-4cba-93ec-2e0566992989] Pending
helpers_test.go:352: "busybox" [f795ac56-8daf-4cba-93ec-2e0566992989] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f795ac56-8daf-4cba-93ec-2e0566992989] Running
E1013 15:29:20.015807 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005713246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-673307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-316150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-316150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.310762256s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-316150 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/old-k8s-version/serial/Stop (89.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-316150 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-316150 --alsologtostderr -v=3: (1m29.498142973s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (89.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-673307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1013 15:29:25.137586 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-673307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.144199658s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-673307 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (87.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-673307 --alsologtostderr -v=3
E1013 15:29:35.379220 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:49.291754 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:29:55.861228 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-673307 --alsologtostderr -v=3: (1m27.169489571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.17s)

TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-516717 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0d0b71ae-5ab9-46ee-8430-f2f6173825cc] Pending
helpers_test.go:352: "busybox" [0d0b71ae-5ab9-46ee-8430-f2f6173825cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0d0b71ae-5ab9-46ee-8430-f2f6173825cc] Running
E1013 15:30:06.217165 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/functional-608191/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004907655s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-516717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-516717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-516717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (83.61s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-516717 --alsologtostderr -v=3
E1013 15:30:15.272762 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.279209 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.290768 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.312244 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.353832 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.435406 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.597131 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:15.918925 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:16.560977 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:17.842632 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:20.404825 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:25.526892 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:35.768241 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:30:36.823182 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-516717 --alsologtostderr -v=3: (1m23.607472622s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316150 -n old-k8s-version-316150
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316150 -n old-k8s-version-316150: exit status 7 (90.718321ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-316150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (42.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-316150 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-316150 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0: (42.122511336s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316150 -n old-k8s-version-316150
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.59s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673307 -n no-preload-673307: exit status 7 (79.388815ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-673307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (55.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-673307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 15:30:56.249861 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.249041 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.256108 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.268404 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.289920 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.331793 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.413959 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.575673 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:07.897290 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:08.538950 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:09.821310 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:12.383399 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:17.504851 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:31:27.746774 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-673307 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (54.900521478s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-673307 -n no-preload-673307
E1013 15:31:48.228893 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/custom-flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-516717 -n embed-certs-516717: exit status 7 (94.292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-516717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (45.77s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-516717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 15:31:37.211373 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/kindnet-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-516717 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (45.3875272s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-516717 -n embed-certs-516717
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.77s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m29.333327408s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.33s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (124.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c5cw9" [3c77287c-8148-47b6-a144-a38a1c954408] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c5cw9" [3c77287c-8148-47b6-a144-a38a1c954408] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 2m4.004675549s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-316150 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (124.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1cc08d3e-5356-425c-89d0-b336c23ec58c] Pending
helpers_test.go:352: "busybox" [1cc08d3e-5356-425c-89d0-b336c23ec58c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1013 15:42:01.537144 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/enable-default-cni-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [1cc08d3e-5356-425c-89d0-b336c23ec58c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.009582757s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-426789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-426789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028164988s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (73.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-426789 --alsologtostderr -v=3
E1013 15:42:17.919064 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/flannel-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:42:20.513598 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/addons-214022/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-426789 --alsologtostderr -v=3: (1m13.537222699s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (73.54s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-316150 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-316150 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316150 -n old-k8s-version-316150
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316150 -n old-k8s-version-316150: exit status 2 (262.640976ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316150 -n old-k8s-version-316150
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316150 -n old-k8s-version-316150: exit status 2 (263.116982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-316150 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316150 -n old-k8s-version-316150
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316150 -n old-k8s-version-316150
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/FirstStart (49.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 15:43:01.800674 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/bridge-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.881058161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789: exit status 7 (83.231213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-426789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (42.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-426789 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (42.131060616s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (42.49s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-400509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-400509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.370460264s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/newest-cni/serial/Stop (2.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-400509 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-400509 --alsologtostderr -v=3: (2.381179526s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.38s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-400509 -n newest-cni-400509
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-400509 -n newest-cni-400509: exit status 7 (79.219735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-400509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (38.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-400509 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (38.503032437s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-400509 -n newest-cni-400509
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-400509 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-400509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-400509 --alsologtostderr -v=1: (1.078485198s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-400509 -n newest-cni-400509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-400509 -n newest-cni-400509: exit status 2 (281.331708ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-400509 -n newest-cni-400509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-400509 -n newest-cni-400509: exit status 2 (288.453773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-400509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-400509 -n newest-cni-400509
E1013 15:44:18.620313 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-400509 -n newest-cni-400509
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-673307 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.85s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-673307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307: exit status 2 (279.381126ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673307 -n no-preload-673307: exit status 2 (273.746801ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-673307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-673307 -n no-preload-673307
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-673307 -n no-preload-673307
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.85s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-516717 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-516717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717: exit status 2 (273.593125ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-516717 -n embed-certs-516717: exit status 2 (274.303708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-516717 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-516717 -n embed-certs-516717
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-516717 -n embed-certs-516717
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (102.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z6wz8" [c1d2745a-8b1e-4dd7-878e-d4822a3f956d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 15:54:08.365305 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/old-k8s-version-316150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:14.883006 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/auto-045564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.264323 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.270789 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.282316 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.303823 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.345307 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.426839 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.588462 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:16.910227 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:17.552193 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:18.834222 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:21.395659 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:26.517473 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 15:54:36.759465 1814927 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-1810975/.minikube/profiles/no-preload-673307/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z6wz8" [c1d2745a-8b1e-4dd7-878e-d4822a3f956d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 1m42.005315629s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-426789 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (102.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-426789 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-426789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789: exit status 2 (279.58473ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789: exit status 2 (274.593581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-426789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-426789 -n default-k8s-diff-port-426789
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)
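The Pause test above accepts exit status 2 from `minikube status` as expected while components are paused or stopped ("may be ok"). A hedged Go sketch of that state-to-exit-code mapping, as it appears from the log (an illustrative reconstruction, not minikube's actual source):

```go
package main

import "fmt"

// exitCodeForState maps a component state, as printed by
// `minikube status --format={{.APIServer}}` or `{{.Kubelet}}`, to the exit
// behaviour observed in the log: Running exits 0, while Paused/Stopped
// exit 2 (non-zero but tolerated mid-pause). Illustrative only.
func exitCodeForState(state string) int {
	switch state {
	case "Running":
		return 0
	case "Paused", "Stopped":
		return 2 // non-zero, but expected during a pause test
	default:
		return 1 // unknown or error state
	}
}

func main() {
	for _, s := range []string{"Paused", "Stopped", "Running"} {
		fmt.Printf("%s -> exit %d\n", s, exitCodeForState(s))
	}
}
```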

                                                
                                    

Test skip (39/324)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.56
267 TestNetworkPlugins/group/cilium 3.96
282 TestStartStop/group/disable-driver-mounts 0.18
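Most of the skips in this table are environment gates: a test declares the driver or container runtime it needs, and it skips itself when the current configuration (here: the KVM driver with containerd) doesn't match. A minimal Go sketch of that gating pattern, with illustrative names rather than minikube's actual helpers:

```go
package main

import "fmt"

// skipReason returns a skip message when the test's required driver or
// container runtime does not match the configuration under test, mirroring
// the messages in the skip table above. Names and logic are illustrative.
func skipReason(wantDriver, wantRuntime, curDriver, curRuntime string) string {
	if wantDriver != "" && wantDriver != curDriver {
		return fmt.Sprintf("only runs with %s driver", wantDriver)
	}
	if wantRuntime != "" && wantRuntime != curRuntime {
		return fmt.Sprintf("only runs with %s container runtime, currently testing %s",
			wantRuntime, curRuntime)
	}
	return "" // no gate matched: the test would run
}

func main() {
	// TestKicCustomNetwork-style gate: needs the docker driver.
	fmt.Println(skipReason("docker", "", "kvm2", "containerd"))
	// TestDockerFlags-style gate: needs the docker runtime.
	fmt.Println(skipReason("", "docker", "kvm2", "containerd"))
}
```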
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
x
+
TestNetworkPlugins/group/kubenet (5.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-045564 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-045564" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-045564

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: docker daemon config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: docker system info:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: cri-docker daemon status:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: cri-docker daemon config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: cri-dockerd version:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: containerd daemon status:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: containerd daemon config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: containerd config dump:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: crio daemon status:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: crio daemon config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: /etc/crio:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

>>> host: crio config:
* Profile "kubenet-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-045564"

----------------------- debugLogs end: kubenet-045564 [took: 5.396350884s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-045564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-045564
--- SKIP: TestNetworkPlugins/group/kubenet (5.56s)

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-045564 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-045564

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-045564

>>> host: /etc/nsswitch.conf:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/hosts:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/resolv.conf:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-045564

>>> host: crictl pods:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: crictl containers:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> k8s: describe netcat deployment:
error: context "cilium-045564" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-045564" does not exist

>>> k8s: netcat logs:
error: context "cilium-045564" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-045564" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-045564" does not exist

>>> k8s: coredns logs:
error: context "cilium-045564" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-045564" does not exist

>>> k8s: api server logs:
error: context "cilium-045564" does not exist

>>> host: /etc/cni:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: ip a s:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: ip r s:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: iptables-save:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: iptables table nat:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-045564

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-045564

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-045564" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-045564" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-045564

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-045564

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-045564" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-045564" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-045564" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-045564" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-045564" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: kubelet daemon config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> k8s: kubelet logs:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-045564

>>> host: docker daemon status:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: docker daemon config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: docker system info:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: cri-docker daemon status:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: cri-docker daemon config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: cri-dockerd version:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: containerd daemon status:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: containerd daemon config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: containerd config dump:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: crio daemon status:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: crio daemon config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: /etc/crio:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

>>> host: crio config:
* Profile "cilium-045564" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-045564"

----------------------- debugLogs end: cilium-045564 [took: 3.793067832s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-045564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-045564
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-917680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-917680
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
